Dec 11 10:11:20 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 11 10:11:20 crc restorecon[4762]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 11 10:11:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 
10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 10:11:21 crc 
restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 
10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 10:11:21 crc restorecon[4762]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 10:11:21 crc restorecon[4762]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 11 10:11:22 crc kubenswrapper[4953]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 10:11:22 crc kubenswrapper[4953]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 11 10:11:22 crc kubenswrapper[4953]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 10:11:22 crc kubenswrapper[4953]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 11 10:11:22 crc kubenswrapper[4953]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 11 10:11:22 crc kubenswrapper[4953]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.162860 4953 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165373 4953 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165390 4953 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165395 4953 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165399 4953 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165403 4953 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165407 4953 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165411 4953 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165415 4953 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165420 4953 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165425 4953 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165429 4953 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165433 4953 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165438 4953 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165442 4953 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165446 4953 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165449 4953 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165453 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165469 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165475 4953 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165479 4953 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165482 4953 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165486 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165489 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165493 4953 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165497 4953 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165501 4953 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165504 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165508 4953 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165511 4953 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165515 4953 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165519 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165522 4953 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165525 4953 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165529 4953 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165533 4953 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165538 4953 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165543 4953 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165548 4953 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165552 4953 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165556 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165560 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165564 4953 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165568 4953 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165586 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165590 4953 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165595 4953 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165599 4953 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165603 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165608 4953 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165611 4953 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165615 4953 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165619 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165623 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165627 4953 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165632 4953 feature_gate.go:330] unrecognized feature gate: Example
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165637 4953 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165643 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165648 4953 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165652 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165656 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165661 4953 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165665 4953 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165669 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165673 4953 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165676 4953 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165680 4953 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165695 4953 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165700 4953 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165704 4953 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165708 4953 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.165712 4953 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165808 4953 flags.go:64] FLAG: --address="0.0.0.0"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165824 4953 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165832 4953 flags.go:64] FLAG: --anonymous-auth="true"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165837 4953 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165843 4953 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165848 4953 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165853 4953 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165859 4953 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165864 4953 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165868 4953 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165872 4953 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165877 4953 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165882 4953 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165886 4953 flags.go:64] FLAG: --cgroup-root=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165890 4953 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165895 4953 flags.go:64] FLAG: --client-ca-file=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165898 4953 flags.go:64] FLAG: --cloud-config=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165902 4953 flags.go:64] FLAG: --cloud-provider=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165907 4953 flags.go:64] FLAG: --cluster-dns="[]"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165911 4953 flags.go:64] FLAG: --cluster-domain=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165916 4953 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165920 4953 flags.go:64] FLAG: --config-dir=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165924 4953 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165928 4953 flags.go:64] FLAG: --container-log-max-files="5"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165934 4953 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165938 4953 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165943 4953 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165947 4953 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165951 4953 flags.go:64] FLAG: --contention-profiling="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165955 4953 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165960 4953 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165964 4953 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165982 4953 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.165993 4953 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166008 4953 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166016 4953 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166021 4953 flags.go:64] FLAG: --enable-load-reader="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166026 4953 flags.go:64] FLAG: --enable-server="true"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166030 4953 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166036 4953 flags.go:64] FLAG: --event-burst="100"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166040 4953 flags.go:64] FLAG: --event-qps="50"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166045 4953 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166049 4953 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166053 4953 flags.go:64] FLAG: --eviction-hard=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166059 4953 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166063 4953 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166068 4953 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166073 4953 flags.go:64] FLAG: --eviction-soft=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166079 4953 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166084 4953 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166089 4953 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166094 4953 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166100 4953 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166104 4953 flags.go:64] FLAG: --fail-swap-on="true"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166109 4953 flags.go:64] FLAG: --feature-gates=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166115 4953 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166121 4953 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166126 4953 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166130 4953 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166135 4953 flags.go:64] FLAG: --healthz-port="10248"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166141 4953 flags.go:64] FLAG: --help="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166145 4953 flags.go:64] FLAG: --hostname-override=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166149 4953 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166153 4953 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166159 4953 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166163 4953 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166168 4953 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166172 4953 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166176 4953 flags.go:64] FLAG: --image-service-endpoint=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166180 4953 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166185 4953 flags.go:64] FLAG: --kube-api-burst="100"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166189 4953 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166194 4953 flags.go:64] FLAG: --kube-api-qps="50"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166198 4953 flags.go:64] FLAG: --kube-reserved=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166202 4953 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166206 4953 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166211 4953 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166215 4953 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166219 4953 flags.go:64] FLAG: --lock-file=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166224 4953 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166228 4953 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166232 4953 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166238 4953 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166243 4953 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166247 4953 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166251 4953 flags.go:64] FLAG: --logging-format="text"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166255 4953 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166260 4953 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166264 4953 flags.go:64] FLAG: --manifest-url=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166268 4953 flags.go:64] FLAG: --manifest-url-header=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166274 4953 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166278 4953 flags.go:64] FLAG: --max-open-files="1000000"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166283 4953 flags.go:64] FLAG: --max-pods="110"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166288 4953 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166292 4953 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166296 4953 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166301 4953 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166306 4953 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166311 4953 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166316 4953 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166328 4953 flags.go:64] FLAG: --node-status-max-images="50"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166334 4953 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166339 4953 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166344 4953 flags.go:64] FLAG: --pod-cidr=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166348 4953 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166356 4953 flags.go:64] FLAG: --pod-manifest-path=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166362 4953 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166367 4953 flags.go:64] FLAG: --pods-per-core="0"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166372 4953 flags.go:64] FLAG: --port="10250"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166376 4953 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166381 4953 flags.go:64] FLAG: --provider-id=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166385 4953 flags.go:64] FLAG: --qos-reserved=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166389 4953 flags.go:64] FLAG: --read-only-port="10255"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166393 4953 flags.go:64] FLAG: --register-node="true"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166397 4953 flags.go:64] FLAG: --register-schedulable="true"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166401 4953 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166409 4953 flags.go:64] FLAG: --registry-burst="10"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166413 4953 flags.go:64] FLAG: --registry-qps="5"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166418 4953 flags.go:64] FLAG: --reserved-cpus=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166422 4953 flags.go:64] FLAG: --reserved-memory=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166427 4953 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166432 4953 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166436 4953 flags.go:64] FLAG: --rotate-certificates="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166440 4953 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166444 4953 flags.go:64] FLAG: --runonce="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166449 4953 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166453 4953 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166457 4953 flags.go:64] FLAG: --seccomp-default="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166461 4953 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166466 4953 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166471 4953 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166475 4953 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166479 4953 flags.go:64] FLAG: --storage-driver-password="root"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166484 4953 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166488 4953 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166492 4953 flags.go:64] FLAG: --storage-driver-user="root"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166496 4953 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166501 4953 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166505 4953 flags.go:64] FLAG: --system-cgroups=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166510 4953 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166516 4953 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166520 4953 flags.go:64] FLAG: --tls-cert-file=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166524 4953 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166529 4953 flags.go:64] FLAG: --tls-min-version=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166534 4953 flags.go:64] FLAG: --tls-private-key-file=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166538 4953 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166542 4953 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166546 4953 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166550 4953 flags.go:64] FLAG: --v="2"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166556 4953 flags.go:64] FLAG: --version="false"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166561 4953 flags.go:64] FLAG: --vmodule=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166566 4953 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.166589 4953 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166689 4953 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166696 4953 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
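The run of flags.go:64 entries above is the kubelet echoing every command-line flag it parsed at startup, one entry per flag, which makes it easy to diff the effective flags between two boots. A minimal sketch, again assuming the journal text arrives on stdin:

```python
#!/usr/bin/env python3
"""Sketch: turn the kubelet's startup FLAG echo into a dict."""
import re
import sys

# Matches e.g.: flags.go:64] FLAG: --node-ip="192.168.126.11"
FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: (--[\w-]+)="(.*?)"')

def parse_flags(text: str) -> dict:
    """Return {flag: value} for every FLAG line in the journal text."""
    return dict(FLAG_RE.findall(text))

if __name__ == "__main__":
    flags = parse_flags(sys.stdin.read())
    # For the boot above: flags["--config"] == "/etc/kubernetes/kubelet.conf"
    for name in sorted(flags):
        print(f"{name}={flags[name]}")
```

Feeding two boots through this and diffing the sorted output is usually the quickest way to spot a changed flag, such as the --system-reserved values shown above.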
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166701 4953 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166706 4953 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166710 4953 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166714 4953 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166718 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166722 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166727 4953 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166731 4953 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166735 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166739 4953 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166743 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166747 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166751 4953 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166755 4953 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166759 4953 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166763 4953 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166767 4953 feature_gate.go:330] unrecognized feature gate: Example
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166770 4953 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166774 4953 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166777 4953 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166781 4953 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166785 4953 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166788 4953 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166792 4953 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166795 4953 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166799 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166803 4953 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166807 4953 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166810 4953 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166815 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166819 4953 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166824 4953 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166829 4953 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166833 4953 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166838 4953 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166842 4953 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166846 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166853 4953 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166868 4953 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166873 4953 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166878 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166883 4953 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166888 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166892 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166897 4953 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166901 4953 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166905 4953 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166910 4953 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166914 4953 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166918 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166922 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166926 4953 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166931 4953 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166936 4953 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166941 4953 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166946 4953 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166951 4953 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166956 4953 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166961 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166965 4953 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166970 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166977 4953 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166982 4953 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166986 4953 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166991 4953 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166995 4953 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.166999 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.167005 4953 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.167010 4953 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.167017 4953 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.174952 4953 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.175003 4953 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175058 4953 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175066 4953 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175073 4953 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175079 4953 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175083 4953 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175087 4953 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175091 4953 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175095 4953 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175098 4953 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175102 4953 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175106 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175110 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175114 4953 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175118 4953 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175123 4953 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
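The feature_gate.go:386 line above is the summary that matters: the kubelet's effective gate set, printed as a Go map literal, after the unrecognized names have been dropped. A small sketch of decoding it into a Python dict; the sample line is abbreviated from the entry at 10:11:22.167017:

```python
#!/usr/bin/env python3
"""Sketch: decode the kubelet's 'feature gates: {map[...]}' summary."""
import re

def parse_gate_map(line: str) -> dict:
    """Turn a Go map literal like {map[A:true B:false]} into a dict."""
    m = re.search(r"feature gates: \{map\[(.*?)\]\}", line)
    if not m:
        raise ValueError("not a feature-gate summary line")
    gates = {}
    for pair in m.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"
    return gates

# Abbreviated from the feature_gate.go:386 entry above.
sample = ("feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true "
          "NodeSwap:false ValidatingAdmissionPolicy:true]}")
assert parse_gate_map(sample) == {
    "CloudDualStackNodeIPs": True,
    "KMSv1": True,
    "NodeSwap": False,
    "ValidatingAdmissionPolicy": True,
}
```

Note that the full map in the log lists only Kubernetes-level gates (KMSv1, NodeSwap, ValidatingAdmissionPolicy, and so on); everything flagged "unrecognized" never makes it into this set.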
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175128 4953 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175132 4953 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175136 4953 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175140 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175144 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175148 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175152 4953 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175156 4953 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175159 4953 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175163 4953 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175166 4953 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175170 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175175 4953 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175179 4953 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175183 4953 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175187 4953 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175190 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175194 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175197 4953 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175202 4953 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175205 4953 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175209 4953 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175213 4953 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175217 4953 feature_gate.go:330] unrecognized feature gate: Example
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175220 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175224 4953 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175227 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175231 4953 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175235 4953 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175238 4953 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175242 4953 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175245 4953 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175249 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175253 4953 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175256 4953 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175260 4953 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175263 4953 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175267 4953 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175271 4953 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175274 4953 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175278 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175281 4953 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175285 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175288 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175292 4953 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175296 4953 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175299 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175303 4953 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175306 4953 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175310 4953 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175314 4953 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175319 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175323 4953 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175327 4953 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175331 4953 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175335 4953 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.175341 4953 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175438 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175445 4953 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175451 4953 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175455 4953 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175459 4953 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175463 4953 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175467 4953 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175471 4953 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175475 4953 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175480 4953 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175484 4953 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175489 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175493 4953 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175497 4953 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175501 4953 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175505 4953 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175508 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175512 4953 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175516 4953 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175519 4953 feature_gate.go:330] unrecognized feature gate: Example
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175523 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175526 4953 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175530 4953 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175534 4953 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175537 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175541 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175545 4953 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175549 4953 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175553 4953 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175556 4953 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175560 4953 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175563 4953 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175567 4953 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175585 4953 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175589 4953 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175593 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175597 4953 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175600 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175604 4953 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175607 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175611 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175615 4953 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175619 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175623 4953 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175627 4953 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175630 4953 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175634 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175637 4953 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175641 4953 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175644 4953 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175648 4953 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175652 4953 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175655 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
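The same "unrecognized feature gate" warnings recur in several bursts (offsets .1653xx, .1666xx, .1750xx, .1754xx and following) because the gate set is apparently re-parsed at different points during startup; the names look like OpenShift-level gates the upstream kubelet simply does not know, so the flood is noisy rather than actionable. A quick sketch for collapsing it into one count per gate, again assuming journal text on stdin:

```python
#!/usr/bin/env python3
"""Sketch: collapse 'unrecognized feature gate' warnings to counts."""
import re
import sys
from collections import Counter

GATE_RE = re.compile(r"unrecognized feature gate: (\w+)")

# Count how often each gate name is warned about across the whole boot.
counts = Counter(GATE_RE.findall(sys.stdin.read()))
for gate, n in counts.most_common():
    print(f"{n:3d}  {gate}")
```

For this boot, most gate names should show up once per burst, which confirms the repetition is re-parsing rather than distinct problems.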
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175659 4953 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175663 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175666 4953 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175670 4953 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175674 4953 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175679 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175683 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175687 4953 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175690 4953 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175694 4953 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175698 4953 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175701 4953 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175705 4953 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175708 4953 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175712 4953 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175716 4953 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175719 4953 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.175723 4953 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.175729 4953 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.211235 4953 server.go:940] "Client rotation is on, will bootstrap in background" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.214453 4953 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.214563 4953 certificate_store.go:130] Loading cert/key pair from 
"/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.215220 4953 server.go:997] "Starting client certificate rotation" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.215256 4953 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.215922 4953 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-18 12:46:19.166042921 +0000 UTC Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.216616 4953 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 170h34m56.949749754s for next certificate rotation Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.234418 4953 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.235767 4953 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.246485 4953 log.go:25] "Validated CRI v1 runtime API" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.267987 4953 log.go:25] "Validated CRI v1 image API" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.269462 4953 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.272439 4953 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-11-10-06-45-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.272469 4953 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.289353 4953 manager.go:217] Machine: {Timestamp:2025-12-11 10:11:22.287810337 +0000 UTC m=+0.311669400 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:28c30a59-aa99-484b-82a7-0daea6b2659e BootID:5fa37296-71b7-4540-87a3-260b8ecb76f4 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 
HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:61:34:a7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:61:34:a7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f8:1a:8e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:5a:12:aa Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:16:82:5d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:39:72:8e Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:b3:06:5d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:4e:0d:1b:64:72:30 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:02:fa:30:25:73:88 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction 
Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.289643 4953 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.289824 4953 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.290392 4953 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.290601 4953 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.290646 4953 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.290858 4953 topology_manager.go:138] "Creating topology manager with none policy" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.290869 4953 container_manager_linux.go:303] "Creating device plugin manager" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.291082 4953 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.291114 4953 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
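Note on the nodeConfig dump above: it carries the node's resource-protection settings, namely SystemReserved (cpu=200m, memory=350Mi, ephemeral-storage=350Mi) and five hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A minimal sketch of how such a threshold can be evaluated against capacity, using a hypothetical helper rather than the kubelet's actual eviction manager:

package main

import "fmt"

type threshold struct {
	signal   string
	percent  float64 // used when > 0, e.g. 0.10 for nodefs.available
	quantity int64   // absolute bytes, e.g. 100Mi for memory.available
}

// breached reports whether available capacity has fallen below the
// threshold, given total capacity in bytes.
func breached(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if t.percent > 0 {
		limit = int64(t.percent * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	nodefs := threshold{signal: "nodefs.available", percent: 0.10}

	fmt.Println(memory.signal, "breached:", breached(memory, 64<<20, 32<<30))         // true: only 64Mi left
	fmt.Println(nodefs.signal, "breached:", breached(nodefs, 20<<30, 85_292_941_312)) // false: ~25% free
}

Percentage thresholds scale with filesystem size; against the 85 GB /dev/vda4 volume reported earlier, the 10% nodefs.available threshold works out to roughly 8.5 GB.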
server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.291344 4953 state_mem.go:36] "Initialized new in-memory state store" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.291466 4953 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.292168 4953 kubelet.go:418] "Attempting to sync node with API server" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.292193 4953 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.292217 4953 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.292231 4953 kubelet.go:324] "Adding apiserver pod source" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.292258 4953 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.300353 4953 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.300943 4953 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.320600 4953 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.321732 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.321746 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.321876 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.321922 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.321920 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.134:6443: connect: connection refused" logger="UnhandledError" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.321937 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.321873 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.134:6443: connect: connection refused" logger="UnhandledError" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 
10:11:22.322102 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.322133 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.322147 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.322161 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.322184 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.322201 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.322214 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.322231 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.322244 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.322764 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.323720 4953 server.go:1280] "Started kubelet" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.323812 4953 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.324026 4953 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.324033 4953 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.324603 4953 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 11 10:11:22 crc systemd[1]: Started Kubernetes Kubelet. 
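Note on the sequence above: systemd reports the kubelet as started, and the kubelet is already listening on 0.0.0.0:10250 and serving the podresources socket, even though every call to https://api-int.crc.testing:6443 is still failing with connection refused; the informers, the event recorder, and the CSINode check all retry in the background instead of blocking startup. The shape of that retry behavior is roughly the following stdlib sketch (illustrative only, not client-go's actual reflector backoff machinery):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "api-int.crc.testing:6443" // apiserver endpoint from the log
	backoff := 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("attempt %d: apiserver reachable\n", attempt)
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		if backoff < 30*time.Second { // cap the exponential backoff
			backoff *= 2
		}
	}
}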
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.337831 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.337876 4953 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.338119 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:07:40.198668933 +0000 UTC Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.338171 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 85h56m17.860500402s for next certificate rotation Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.338223 4953 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.338238 4953 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.338239 4953 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.338324 4953 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.338271 4953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.134:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1880217ec75db3dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 10:11:22.323637213 +0000 UTC m=+0.347496286,LastTimestamp:2025-12-11 10:11:22.323637213 +0000 UTC m=+0.347496286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.338809 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.338881 4953 server.go:460] "Adding debug handlers to kubelet server" Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.338891 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.134:6443: connect: connection refused" logger="UnhandledError" Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.339112 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" interval="200ms"
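Note on the long run of reconstruct.go:130 entries that follows: this is the volume manager rebuilding its actual state of the world from disk after restart. Each pods/<uid>/volumes/<plugin>/<volume> directory under /var/lib/kubelet is re-added as an "uncertain" mount, to be reconciled once the desired state arrives from the (currently unreachable) API server. Roughly, assuming the on-disk layout visible in this log (a sketch only, not the kubelet's real reconstruction code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	root := "/var/lib/kubelet/pods" // kubelet root directory from the log
	pods, err := os.ReadDir(root)
	if err != nil {
		fmt.Println("skipping reconstruction:", err)
		return
	}
	for _, pod := range pods {
		plugins, err := os.ReadDir(filepath.Join(root, pod.Name(), "volumes"))
		if err != nil {
			continue // pod directory without volumes
		}
		for _, plugin := range plugins {
			vols, err := os.ReadDir(filepath.Join(root, pod.Name(), "volumes", plugin.Name()))
			if err != nil {
				continue
			}
			for _, v := range vols {
				// Mirrors: "Volume is marked as uncertain and added
				// into the actual state".
				fmt.Printf("uncertain volume podName=%s volumeName=%s/%s\n",
					pod.Name(), plugin.Name(), v.Name())
			}
		}
	}
}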
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344618 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344630 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344639 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344647 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344656 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344667 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344676 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344686 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344694 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344703 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344713 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344721 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344731 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344740 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344748 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344757 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344767 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344776 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344785 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344795 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344804 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344826 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344835 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344843 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344852 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344863 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344873 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344882 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344891 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344900 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344909 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344917 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344937 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344951 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344965 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344978 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344987 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.344996 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345010 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345019 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345033 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345047 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345055 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345064 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345072 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345081 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345094 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345122 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345134 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345142 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345150 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345162 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345172 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345182 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345191 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345218 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345227 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345236 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345252 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345261 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345271 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345279 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345289 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345298 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345307 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345316 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345325 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345334 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345343 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345352 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345361 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345369 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345378 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345387 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345396 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345404 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345414 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345423 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345431 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345440 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345450 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345458 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345467 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345475 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345484 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345493 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345504 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345622 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345634 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345644 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345660 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345668 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345678 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345689 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345698 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345706 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345715 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345727 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345744 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345752 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345762 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345770 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345778 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345791 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345802 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345812 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345822 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345832 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345843 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345852 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345862 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345872 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345882 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345892 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345900 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345909 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345918 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345926 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345934 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345943 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345951 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345959 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345968 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345976 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345983 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.345992 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.346000 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.346008 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.346051 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.346060 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.346240 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.346262 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.346272 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.346290 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.351709 4953 factory.go:55] Registering systemd factory Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.351747 4953 factory.go:221] Registration of the systemd container factory successfully Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352107 4953 factory.go:153] Registering CRI-O factory Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352141 4953 factory.go:221] Registration of the crio container factory successfully Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352146 4953 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352210 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352227 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352246 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352267 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352293 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352308 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352321 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352340 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352361 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352377 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352395 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352410 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352427 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352444 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352468 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352496 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352511 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352532 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352249 4953 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352603 4953 factory.go:103] Registering Raw factory Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352626 4953 manager.go:1196] Started watching for new ooms in manager Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352741 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352773 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352791 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352804 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352815 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352829 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352841 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352857 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" 
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352869 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352881 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352895 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352908 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352922 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352932 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.352943 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353058 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353095 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353118 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353131 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353149 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353161 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353174 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353189 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353200 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353215 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353241 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353254 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353272 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353312 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353327 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353345 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353357 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353372 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353384 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353395 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353412 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353429 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353443 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353459 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353470 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353479 4953 manager.go:319] Starting recovery of all containers
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353485 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353499 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353512 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353523 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353534 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353549 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353560 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353593 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353605 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353615 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353629 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353639 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353651 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353660 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353670 4953 reconstruct.go:97] "Volume reconstruction finished"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.353678 4953 reconciler.go:26] "Reconciler: start to sync state"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.377615 4953 manager.go:324] Recovery completed
Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.442623 4953 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.454189 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.457086 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.457225 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.457301 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.458098 4953 cpu_manager.go:225] "Starting CPU manager" policy="none"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.458114 4953 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.458148 4953 state_mem.go:36] "Initialized new in-memory state store"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.470510 4953 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.471928 4953 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.471984 4953 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.472015 4953 kubelet.go:2335] "Starting kubelet main sync loop"
Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.472159 4953 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 11 10:11:22 crc kubenswrapper[4953]: W1211 10:11:22.501603 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused
Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.501735 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.134:6443: connect: connection refused" logger="UnhandledError"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.510977 4953 policy_none.go:49] "None policy: Start"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.512122 4953 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.512153 4953 state_mem.go:35] "Initializing new in-memory state store"
Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.540092 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" interval="400ms"
Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.543648 4953 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.573274 4953 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.573858 4953 manager.go:334] "Starting Device Plugin manager"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.573929 4953 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.573945 4953 server.go:79] "Starting device plugin registration server"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.574424 4953 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.574441 4953 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.574626 4953 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.574699 4953 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.574719 4953 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.581142 4953 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.675626 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.677311 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.677354 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.677363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.677386 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.677974 4953 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.134:6443: connect: connection refused" node="crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.774189 4953 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"]
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.774360 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.776110 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.776149 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.776161 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.776336 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.776786 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.776842 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.777264 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.777321 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.777332 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.777482 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.777657 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.777717 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.778753 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.778777 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.778787 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.778873 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.779274 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.779307 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.779697 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.779715 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.779723 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.781598 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.781662 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.781678 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.781950 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.783195 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.783259 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.785932 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.786046 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.786061 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.795842 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.795890 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.795925 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.796194 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.796228 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.796237 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.796330 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.796361 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.796372 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.796699 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.796741 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.801856 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.801898 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.801911 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860293 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860351 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860457 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860482 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860545 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860643 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860678 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860762 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860793 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860816 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860865 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860919 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.860992 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.861018 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.861042 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.878459 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.879829 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.879874 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.879883 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.879910 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.880391 4953 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.134:6443: connect: connection refused" node="crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: E1211 10:11:22.941309 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" interval="800ms"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962561 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962650 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962677 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962702 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962727 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962749 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962780 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962801 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962823 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962826 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962846 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962905 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962923 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962930 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962847 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962889 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962844 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.963048 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.963082 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962946 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.963101 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.963124 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.963143 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.963105 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.963186 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.963201 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.963245 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962954 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.962910 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 10:11:22 crc kubenswrapper[4953]: I1211 10:11:22.963298 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.114059 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.131449 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.153755 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 10:11:23 crc kubenswrapper[4953]: W1211 10:11:23.154933 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-2f57568d7e9dc35a62cb83afb3556b98c7daaad6f6d912f543aa077d0e7da9f2 WatchSource:0}: Error finding container 2f57568d7e9dc35a62cb83afb3556b98c7daaad6f6d912f543aa077d0e7da9f2: Status 404 returned error can't find the container with id 2f57568d7e9dc35a62cb83afb3556b98c7daaad6f6d912f543aa077d0e7da9f2
Dec 11 10:11:23 crc kubenswrapper[4953]: W1211 10:11:23.158241 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c67d1a8e078316a5b9e1fd9efc67ec8cffa45443940a6741054b618a6b965011 WatchSource:0}: Error finding container c67d1a8e078316a5b9e1fd9efc67ec8cffa45443940a6741054b618a6b965011: Status 404 returned error can't find the container with id c67d1a8e078316a5b9e1fd9efc67ec8cffa45443940a6741054b618a6b965011
Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.161919 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.167160 4953 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 10:11:23 crc kubenswrapper[4953]: W1211 10:11:23.167839 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7f28ee609b5c1a417155f16e17c45f4f7f6d8eb1bedd37774938a488d47512c7 WatchSource:0}: Error finding container 7f28ee609b5c1a417155f16e17c45f4f7f6d8eb1bedd37774938a488d47512c7: Status 404 returned error can't find the container with id 7f28ee609b5c1a417155f16e17c45f4f7f6d8eb1bedd37774938a488d47512c7 Dec 11 10:11:23 crc kubenswrapper[4953]: W1211 10:11:23.173554 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e3ec2334e02d67d8a914755f1fadf87b4b4858f1f5400fe2249a290a0ee22d70 WatchSource:0}: Error finding container e3ec2334e02d67d8a914755f1fadf87b4b4858f1f5400fe2249a290a0ee22d70: Status 404 returned error can't find the container with id e3ec2334e02d67d8a914755f1fadf87b4b4858f1f5400fe2249a290a0ee22d70 Dec 11 10:11:23 crc kubenswrapper[4953]: W1211 10:11:23.184762 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-00e37ca17a80943fd0e744cda1862944741f092a1da39791148438971ad8cda1 WatchSource:0}: Error finding container 00e37ca17a80943fd0e744cda1862944741f092a1da39791148438971ad8cda1: Status 404 returned error can't find the container with id 00e37ca17a80943fd0e744cda1862944741f092a1da39791148438971ad8cda1 Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.281391 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.282795 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.282832 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.282843 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.282864 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 10:11:23 crc kubenswrapper[4953]: E1211 10:11:23.283239 4953 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.134:6443: connect: connection refused" node="crc" Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.325308 4953 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.476346 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7f28ee609b5c1a417155f16e17c45f4f7f6d8eb1bedd37774938a488d47512c7"} Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.477341 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c67d1a8e078316a5b9e1fd9efc67ec8cffa45443940a6741054b618a6b965011"} Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.478145 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2f57568d7e9dc35a62cb83afb3556b98c7daaad6f6d912f543aa077d0e7da9f2"} Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.478960 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"00e37ca17a80943fd0e744cda1862944741f092a1da39791148438971ad8cda1"} Dec 11 10:11:23 crc kubenswrapper[4953]: I1211 10:11:23.479652 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e3ec2334e02d67d8a914755f1fadf87b4b4858f1f5400fe2249a290a0ee22d70"} Dec 11 10:11:23 crc kubenswrapper[4953]: W1211 10:11:23.557672 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:23 crc kubenswrapper[4953]: E1211 10:11:23.558121 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.134:6443: connect: connection refused" logger="UnhandledError" Dec 11 10:11:23 crc kubenswrapper[4953]: E1211 10:11:23.742428 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" interval="1.6s" Dec 11 10:11:23 crc kubenswrapper[4953]: W1211 10:11:23.798743 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:23 crc kubenswrapper[4953]: E1211 10:11:23.798843 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.134:6443: connect: connection refused" logger="UnhandledError" Dec 11 10:11:23 crc kubenswrapper[4953]: W1211 10:11:23.801657 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:23 crc kubenswrapper[4953]: E1211 10:11:23.801708 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.134:6443: connect: connection refused" logger="UnhandledError" Dec 11 10:11:23 crc kubenswrapper[4953]: W1211 10:11:23.825689 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:23 crc kubenswrapper[4953]: E1211 10:11:23.825811 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.134:6443: connect: connection refused" logger="UnhandledError" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.084266 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.085802 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.085856 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.085868 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.085892 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 10:11:24 crc kubenswrapper[4953]: E1211 10:11:24.086499 4953 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.134:6443: connect: connection refused" node="crc" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.324734 4953 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.483217 4953 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="55c2f4c5d0c92c19f5eecf1971eb6cff623518a305068c8384aaa10f68201431" exitCode=0 Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.483328 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"55c2f4c5d0c92c19f5eecf1971eb6cff623518a305068c8384aaa10f68201431"} Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.483474 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.484447 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.484635 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.484675 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 
10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.485530 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.485512 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be"} Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.485421 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be" exitCode=0 Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.486372 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.486417 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.486427 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.488122 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.489004 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.489054 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.489063 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.489066 4953 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="145c63ec5d9aa482290fb3b6e2dc891fc95675fc0124836381f31f6535eb4574" exitCode=0 Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.489127 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"145c63ec5d9aa482290fb3b6e2dc891fc95675fc0124836381f31f6535eb4574"} Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.489214 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.490323 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.490349 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.490359 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.491812 4953 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82" exitCode=0 Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.491847 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82"} Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.491882 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.492864 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.492887 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.492899 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.500176 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd"} Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.500215 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357"} Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.500227 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e"} Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.500236 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83"} Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.500356 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.501239 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.501283 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:24 crc kubenswrapper[4953]: I1211 10:11:24.501294 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.325507 4953 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:25 crc kubenswrapper[4953]: E1211 10:11:25.344118 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" interval="3.2s" Dec 11 10:11:25 
crc kubenswrapper[4953]: W1211 10:11:25.437530 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:25 crc kubenswrapper[4953]: E1211 10:11:25.437683 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.134:6443: connect: connection refused" logger="UnhandledError" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.503872 4953 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ca1334428660ef6ad1e7823f2498f401572385894df84f2614cba812e61c6703" exitCode=0 Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.503974 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.503974 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ca1334428660ef6ad1e7823f2498f401572385894df84f2614cba812e61c6703"} Dec 11 10:11:25 crc kubenswrapper[4953]: W1211 10:11:25.504558 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:25 crc kubenswrapper[4953]: E1211 10:11:25.504707 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.134:6443: connect: connection refused" logger="UnhandledError" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.505813 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.505862 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.505880 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.507308 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5"} Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.507372 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39"} Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.507385 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f"} Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.508934 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"976b60a31862ddd53a1343b6fc3d27137f731775f54572f0c6e202fe6d7db1de"} Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.509032 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.537364 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.537415 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.537435 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.547040 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.547981 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.548610 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6e43d812b41951ea02ea6aeaf53d101e762a3bc0513865818ff2dcc6506a24d1"} Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.548665 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7ec7f5911594475d4a03216b385df264254e50cb55ef7eee3d2ac0a88e8ef1af"} Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.548687 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"42af1d5ca92f02433468753b3f0f0cb74ef360928733d71e4316fb8ed77aea63"} Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.549231 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.549263 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.549275 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.549281 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.549316 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.549332 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 
10:11:25.686776 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.687798 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.687851 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.687864 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.687894 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 10:11:25 crc kubenswrapper[4953]: E1211 10:11:25.688389 4953 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.134:6443: connect: connection refused" node="crc" Dec 11 10:11:25 crc kubenswrapper[4953]: I1211 10:11:25.929164 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.031389 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.325469 4953 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:26 crc kubenswrapper[4953]: W1211 10:11:26.450747 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:26 crc kubenswrapper[4953]: E1211 10:11:26.450835 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.134:6443: connect: connection refused" logger="UnhandledError" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.553298 4953 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f3c4d5a11951bd1f654969b2dcc10aaa0efae1846b56fef558005b0eb34398f4" exitCode=0 Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.553395 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f3c4d5a11951bd1f654969b2dcc10aaa0efae1846b56fef558005b0eb34398f4"} Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.553560 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.554943 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.554986 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:26 crc 
kubenswrapper[4953]: I1211 10:11:26.555002 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.565138 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e"} Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.565200 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.565281 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.565333 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109"} Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.565372 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.565429 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.565287 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.566148 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.566177 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.566178 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.566211 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.566226 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.566188 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.566870 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.566940 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.566965 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.567677 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:26 crc kubenswrapper[4953]: I1211 10:11:26.567745 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:26 crc 
kubenswrapper[4953]: I1211 10:11:26.567772 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:26 crc kubenswrapper[4953]: W1211 10:11:26.599710 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.134:6443: connect: connection refused Dec 11 10:11:26 crc kubenswrapper[4953]: E1211 10:11:26.599831 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.134:6443: connect: connection refused" logger="UnhandledError" Dec 11 10:11:26 crc kubenswrapper[4953]: E1211 10:11:26.766388 4953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.134:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1880217ec75db3dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 10:11:22.323637213 +0000 UTC m=+0.347496286,LastTimestamp:2025-12-11 10:11:22.323637213 +0000 UTC m=+0.347496286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.349468 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.574840 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.574880 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.574896 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.574841 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"659fc1d0a3f05b3ffca595f856170f6aa25b3a76952b8d7ce95e964ba07945eb"} Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.574984 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.574985 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c0c17143aae40b8649cff4e5f7d27ff19ac785631f95cc452827c261fe9c4dc4"} Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.575016 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d6648fb362956fd7345bfe3de590dcfbe5a6c0ef212bab82ba2548be5b7620cf"} Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.575942 
4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.575972 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.575983 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.576018 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.576088 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.576102 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.576652 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.576683 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:27 crc kubenswrapper[4953]: I1211 10:11:27.576696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.534148 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.583954 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e132fc9cac7da5a9985fa70a41db8bb9ae921c178c7b0f4883c48f6bf2135c77"} Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.584020 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.584030 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cc4b3d49997cc52910540c62528b8e8cc895b4746caa425ab7e108a3516b6eca"} Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.584069 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.584089 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.584157 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.585639 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.585677 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.585690 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.585700 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.585718 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.585740 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.585750 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.585791 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.585805 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.889022 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.890386 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.890419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.890429 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.890450 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.929019 4953 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.929131 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:11:28 crc kubenswrapper[4953]: I1211 10:11:28.982752 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:11:29 crc kubenswrapper[4953]: I1211 10:11:29.270257 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 11 10:11:29 crc kubenswrapper[4953]: I1211 10:11:29.585951 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:11:29 crc kubenswrapper[4953]: I1211 10:11:29.586001 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:29 crc kubenswrapper[4953]: I1211 10:11:29.586022 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:29 crc kubenswrapper[4953]: I1211 10:11:29.587031 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:29 crc kubenswrapper[4953]: 
I1211 10:11:29.587061 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:29 crc kubenswrapper[4953]: I1211 10:11:29.587073 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:29 crc kubenswrapper[4953]: I1211 10:11:29.587139 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:29 crc kubenswrapper[4953]: I1211 10:11:29.587165 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:29 crc kubenswrapper[4953]: I1211 10:11:29.587174 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:30 crc kubenswrapper[4953]: I1211 10:11:30.588555 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:30 crc kubenswrapper[4953]: I1211 10:11:30.590323 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:30 crc kubenswrapper[4953]: I1211 10:11:30.590416 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:30 crc kubenswrapper[4953]: I1211 10:11:30.590439 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:30 crc kubenswrapper[4953]: I1211 10:11:30.896168 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:11:30 crc kubenswrapper[4953]: I1211 10:11:30.896431 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:30 crc kubenswrapper[4953]: I1211 10:11:30.898053 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:30 crc kubenswrapper[4953]: I1211 10:11:30.898159 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:30 crc kubenswrapper[4953]: I1211 10:11:30.898187 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:30 crc kubenswrapper[4953]: I1211 10:11:30.952187 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 11 10:11:31 crc kubenswrapper[4953]: I1211 10:11:31.590833 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:31 crc kubenswrapper[4953]: I1211 10:11:31.597406 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:31 crc kubenswrapper[4953]: I1211 10:11:31.597462 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:31 crc kubenswrapper[4953]: I1211 10:11:31.597474 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:32 crc kubenswrapper[4953]: E1211 10:11:32.581679 4953 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 10:11:33 crc kubenswrapper[4953]: I1211 10:11:33.569827 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 10:11:33 crc kubenswrapper[4953]: I1211 10:11:33.570078 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:33 crc kubenswrapper[4953]: I1211 10:11:33.571311 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:33 crc kubenswrapper[4953]: I1211 10:11:33.571359 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:33 crc kubenswrapper[4953]: I1211 10:11:33.571372 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:33 crc kubenswrapper[4953]: I1211 10:11:33.574079 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 10:11:33 crc kubenswrapper[4953]: I1211 10:11:33.595561 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:33 crc kubenswrapper[4953]: I1211 10:11:33.596795 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:33 crc kubenswrapper[4953]: I1211 10:11:33.596835 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:33 crc kubenswrapper[4953]: I1211 10:11:33.596853 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:33 crc kubenswrapper[4953]: I1211 10:11:33.599312 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 10:11:34 crc kubenswrapper[4953]: I1211 10:11:34.658795 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:34 crc kubenswrapper[4953]: I1211 10:11:34.659907 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:34 crc kubenswrapper[4953]: I1211 10:11:34.659970 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:34 crc kubenswrapper[4953]: I1211 10:11:34.659985 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:37 crc kubenswrapper[4953]: I1211 10:11:37.326202 4953 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 11 10:11:37 crc kubenswrapper[4953]: I1211 10:11:37.669044 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 10:11:37 crc kubenswrapper[4953]: I1211 10:11:37.670827 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e" exitCode=255 Dec 11 10:11:37 crc kubenswrapper[4953]: I1211 10:11:37.670895 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e"} Dec 11 10:11:37 crc kubenswrapper[4953]: I1211 10:11:37.671056 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:37 crc kubenswrapper[4953]: I1211 10:11:37.673583 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:37 crc kubenswrapper[4953]: I1211 10:11:37.673605 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:37 crc kubenswrapper[4953]: I1211 10:11:37.673613 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:37 crc kubenswrapper[4953]: I1211 10:11:37.674013 4953 scope.go:117] "RemoveContainer" containerID="a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e" Dec 11 10:11:37 crc kubenswrapper[4953]: I1211 10:11:37.933336 4953 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 10:11:37 crc kubenswrapper[4953]: I1211 10:11:37.933445 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 10:11:38 crc kubenswrapper[4953]: I1211 10:11:38.038441 4953 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Dec 11 10:11:38 crc kubenswrapper[4953]: I1211 10:11:38.038502 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 10:11:38 crc kubenswrapper[4953]: I1211 10:11:38.675827 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 10:11:38 crc kubenswrapper[4953]: I1211 10:11:38.677708 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950"} Dec 11 10:11:38 crc kubenswrapper[4953]: I1211 10:11:38.677870 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:38 crc kubenswrapper[4953]: I1211 10:11:38.678787 4953 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:38 crc kubenswrapper[4953]: I1211 10:11:38.678827 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:38 crc kubenswrapper[4953]: I1211 10:11:38.678839 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:38 crc kubenswrapper[4953]: I1211 10:11:38.930226 4953 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 10:11:38 crc kubenswrapper[4953]: I1211 10:11:38.930301 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 10:11:39 crc kubenswrapper[4953]: I1211 10:11:39.125133 4953 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]log ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]etcd ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/priority-and-fairness-filter ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/start-apiextensions-informers ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/start-apiextensions-controllers ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/crd-informer-synced ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/start-system-namespaces-controller ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 11 
10:11:39 crc kubenswrapper[4953]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/bootstrap-controller ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/start-kube-aggregator-informers ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/apiservice-registration-controller ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/apiservice-discovery-controller ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]autoregister-completion ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/apiservice-openapi-controller ok Dec 11 10:11:39 crc kubenswrapper[4953]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 11 10:11:39 crc kubenswrapper[4953]: livez check failed Dec 11 10:11:39 crc kubenswrapper[4953]: I1211 10:11:39.125208 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:11:40 crc kubenswrapper[4953]: I1211 10:11:40.897064 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:11:40 crc kubenswrapper[4953]: I1211 10:11:40.897317 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:40 crc kubenswrapper[4953]: I1211 10:11:40.898685 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:40 crc kubenswrapper[4953]: I1211 10:11:40.898728 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:40 crc kubenswrapper[4953]: I1211 10:11:40.898737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:40 crc kubenswrapper[4953]: I1211 10:11:40.981216 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 11 10:11:40 crc kubenswrapper[4953]: I1211 10:11:40.981349 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:40 crc kubenswrapper[4953]: I1211 10:11:40.982812 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:40 crc kubenswrapper[4953]: I1211 10:11:40.982852 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:40 crc kubenswrapper[4953]: I1211 10:11:40.982862 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:40 crc kubenswrapper[4953]: I1211 
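The kube-apiserver startup-probe failures above follow one story: first 403, because the probe's anonymous request cannot read /livez while the system:public-info-viewer clusterroles are missing, then 500 while the livez body itself reports [-]poststarthook/rbac/bootstrap-roles failed. Kubernetes HTTP probes treat any status in [200,400) as success and everything else as failure. Below is a minimal standalone sketch of that check in Go; the endpoint URL and the TLS handling are illustrative for a CRC-style node, not the kubelet's actual prober.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeOnce mirrors HTTP-probe semantics: any status in [200,400) is
    // success. An anonymous GET against /livez will see 403 until the RBAC
    // bootstrap roles exist, exactly as in the log above.
    func probeOnce(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Verification is skipped only because this is a local
    		// diagnostic sketch, not production code.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
    		return nil
    	}
    	return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    }

    func main() {
    	if err := probeOnce("https://api-int.crc.testing:6443/livez"); err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	fmt.Println("probe succeeded")
    }

Run repeatedly during bootstrap, this flips from 403 to 500 to success in the same order the probe entries do, since the apiserver serves /livez before its RBAC post-start hook completes.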
Dec 11 10:11:40 crc kubenswrapper[4953]: I1211 10:11:40.999025 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 11 10:11:41 crc kubenswrapper[4953]: I1211 10:11:41.685615 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 10:11:41 crc kubenswrapper[4953]: I1211 10:11:41.686604 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:11:41 crc kubenswrapper[4953]: I1211 10:11:41.686631 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:11:41 crc kubenswrapper[4953]: I1211 10:11:41.686646 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:11:42 crc kubenswrapper[4953]: E1211 10:11:42.582757 4953 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 10:11:42 crc kubenswrapper[4953]: E1211 10:11:42.946880 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Dec 11 10:11:42 crc kubenswrapper[4953]: I1211 10:11:42.960593 4953 trace.go:236] Trace[1539619887]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 10:11:28.882) (total time: 14077ms):
Dec 11 10:11:42 crc kubenswrapper[4953]: Trace[1539619887]: ---"Objects listed" error: 14077ms (10:11:42.960)
Dec 11 10:11:42 crc kubenswrapper[4953]: Trace[1539619887]: [14.077618748s] [14.077618748s] END
Dec 11 10:11:42 crc kubenswrapper[4953]: I1211 10:11:42.960628 4953 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 11 10:11:42 crc kubenswrapper[4953]: I1211 10:11:42.960971 4953 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 11 10:11:42 crc kubenswrapper[4953]: I1211 10:11:42.962606 4953 trace.go:236] Trace[1914785477]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 10:11:30.157) (total time: 12805ms):
Dec 11 10:11:42 crc kubenswrapper[4953]: Trace[1914785477]: ---"Objects listed" error: 12805ms (10:11:42.962)
Dec 11 10:11:42 crc kubenswrapper[4953]: Trace[1914785477]: [12.805253318s] [12.805253318s] END
Dec 11 10:11:42 crc kubenswrapper[4953]: I1211 10:11:42.962630 4953 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 11 10:11:42 crc kubenswrapper[4953]: I1211 10:11:42.963476 4953 trace.go:236] Trace[195659031]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 10:11:31.690) (total time: 11273ms):
Dec 11 10:11:42 crc kubenswrapper[4953]: Trace[195659031]: ---"Objects listed" error: 11272ms (10:11:42.963)
Dec 11 10:11:42 crc kubenswrapper[4953]: Trace[195659031]: [11.273017677s] [11.273017677s] END
Dec 11 10:11:42 crc kubenswrapper[4953]: I1211 10:11:42.963497 4953 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 11 10:11:42 crc kubenswrapper[4953]: E1211 10:11:42.966995 4953 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
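The lease error above ("Failed to ensure lease exists, will retry ... interval=6.4s") and the 11-14 s reflector list times are the same slow apiserver seen from two sides. The observed 6.4 s interval is consistent with a doubling retry (200 ms, 400 ms, 800 ms, 1.6 s, 3.2 s, 6.4 s) under a small cap; a compressed sketch of that pattern follows, where the base, cap, and ensure() stand-in are illustrative assumptions, not kubelet source.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retryWithBackoff retries ensure() with a doubling interval, capped at
    // maxInterval, printing the same "will retry" shape as the log above.
    func retryWithBackoff(ensure func() error, maxInterval time.Duration) error {
    	interval := 200 * time.Millisecond
    	for attempt := 1; attempt <= 10; attempt++ {
    		if err := ensure(); err == nil {
    			return nil
    		} else {
    			fmt.Printf("attempt %d failed, will retry; interval=%v err=%v\n", attempt, interval, err)
    		}
    		time.Sleep(interval) // this demo really sleeps
    		interval *= 2
    		if interval > maxInterval {
    			interval = maxInterval
    		}
    	}
    	return errors.New("gave up")
    }

    func main() {
    	calls := 0
    	// Fake ensure() that starts succeeding on the 6th call, mimicking an
    	// apiserver that becomes responsive part-way through kubelet startup.
    	err := retryWithBackoff(func() error {
    		calls++
    		if calls < 6 {
    			return errors.New("context deadline exceeded")
    		}
    		return nil
    	}, 7*time.Second)
    	fmt.Println("result:", err)
    }

The node-registration and eviction-manager errors in the same window resolve themselves the same way: once the list calls complete and the caches populate, the retries succeed.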
Dec 11 10:11:42 crc kubenswrapper[4953]: I1211 10:11:42.970535 4953 trace.go:236] Trace[128070104]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 10:11:31.451) (total time: 11519ms):
Dec 11 10:11:42 crc kubenswrapper[4953]: Trace[128070104]: ---"Objects listed" error: 11519ms (10:11:42.970)
Dec 11 10:11:42 crc kubenswrapper[4953]: Trace[128070104]: [11.519221028s] [11.519221028s] END
Dec 11 10:11:42 crc kubenswrapper[4953]: I1211 10:11:42.970567 4953 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.307384 4953 apiserver.go:52] "Watching apiserver"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.309773 4953 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.310072 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.310446 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.310502 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.310584 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.311084 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.311334 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.311661 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.312074 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.312685 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.312737 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.314671 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.314691 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.314793 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.314827 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.314879 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.315502 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.315585 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.316112 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.316380 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.338106 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.339614 4953 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.349897 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.358934 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.363778 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.363820 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.363840 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.363859 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.363875 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.363891 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.363907 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
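The three "Failed to update status for pod" entries above share one root cause: every pod patch must pass the pod.network-node-identity.openshift.io mutating webhook at https://127.0.0.1:9743, and the webhook server, itself one of the pods still being recreated, is not listening yet, hence "connection refused". The patches are retried and go through once the webhook container is up. A quick reachability check from the node, sketched in Go with the address taken from the log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // Dial the webhook endpoint the status patches are failing against.
    // "connection refused" here means nothing is listening on 9743 yet,
    // the same condition the kubelet keeps reporting above.
    func main() {
    	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
    	if err != nil {
    		fmt.Println("webhook endpoint not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("webhook endpoint is accepting connections")
    }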
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.363924 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.363973 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.363993 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364007 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364022 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364039 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364054 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364084 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364100 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364114 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364132 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364146 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364164 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364191 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364208 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364223 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364240 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364257 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364275 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364291 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364335 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364351 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364379 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364396 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364412 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364441 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364457 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364473 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364495 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364521 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364537 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364595 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364619 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364638 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364661 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364676 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364691 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364707 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364722 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364737 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364751 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364767 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364783 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364834 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364856 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364876 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364891 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364929 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364944 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364978 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.364994 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365010 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365026 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365040 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365082 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365102 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365118 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365241 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365263 4953 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365282 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365302 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365319 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365338 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365358 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365376 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365395 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365411 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365426 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 
10:11:43.365442 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365457 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365474 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365489 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365504 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365521 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365537 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365551 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365582 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365601 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365619 4953 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365614 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365639 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365663 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365684 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365681 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365700 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365716 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365733 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365748 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365754 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365757 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365764 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365856 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365888 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365914 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365920 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365935 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365959 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.365981 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366002 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366022 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366039 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366056 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366148 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366177 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366202 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366222 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366238 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366254 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366274 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366310 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366333 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366349 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366368 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366384 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366399 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366415 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366431 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366449 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366466 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366484 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366501 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366517 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366533 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366550 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366589 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366611 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366628 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366646 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366666 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366684 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366712 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366730 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366749 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366767 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366784 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" 
(UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366802 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366818 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366852 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366880 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366898 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366915 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366935 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366952 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366968 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366984 4953 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367002 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367019 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367036 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367054 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367072 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367090 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367108 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367136 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367159 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 
11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367177 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367195 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367212 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367242 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367270 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367299 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367323 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367345 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367366 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367387 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367409 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367431 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367451 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367468 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367485 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367507 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367524 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367541 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367558 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367592 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367609 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367639 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367658 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367679 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367696 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367713 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367731 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367749 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367766 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367783 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367800 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367817 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367835 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367851 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367867 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367886 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367903 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367920 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367958 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 
10:11:43.367985 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368008 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368028 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368046 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368067 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368085 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368101 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368121 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368143 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368161 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368180 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368197 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368216 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368263 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368276 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368295 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368316 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368330 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.377903 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366003 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.378551 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366075 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366074 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366247 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366281 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366379 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366455 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366505 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366538 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366663 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366684 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366747 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366828 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.366853 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367083 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367385 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367388 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367657 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367730 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.367871 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368088 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368317 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368615 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368863 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368872 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.368884 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.369000 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.369163 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.369554 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.369775 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.369920 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.370194 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.370688 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.370937 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.371057 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.371081 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.371624 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.372181 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.372908 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.372943 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.373070 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.373173 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.373286 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.373301 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.373516 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.373656 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.373827 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.373896 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.374186 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.374237 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.374404 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.374746 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.374796 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.375125 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.375466 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.375609 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.375715 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.375905 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.375943 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.375962 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.376285 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.376415 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.376569 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.376831 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.377212 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.377347 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.377854 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.377863 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.378470 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.378985 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.379842 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.380125 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.380212 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.380226 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.380450 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.380810 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.381635 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.381817 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.381815 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.379328 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.382081 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:11:43.882044718 +0000 UTC m=+21.905903831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.382092 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.382110 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.378943 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.382366 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.382374 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.382524 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.382908 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.383633 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.383645 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.384156 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.384201 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.384238 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.384267 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.384421 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.384525 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.384628 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.384822 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.385001 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.385012 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.385226 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.385665 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.385684 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.385697 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.385711 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.385858 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.386167 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.386516 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.386518 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.386524 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.387285 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.387489 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.387639 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.387806 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.387827 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.387977 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.388173 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.388318 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.388481 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.388743 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.388843 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.388894 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.388922 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.389002 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-11 10:11:43.888976092 +0000 UTC m=+21.912835125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.389010 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.388867 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.389089 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:43.889068064 +0000 UTC m=+21.912927187 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.389172 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.389301 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.389344 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.389743 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.390050 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.390072 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.390083 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.390162 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.390321 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.390429 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.390773 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.390909 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.391171 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.391627 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.391759 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.391787 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.391810 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.391855 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.391991 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.392252 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.392284 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.392349 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.393482 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.393652 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.394015 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.394063 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.394260 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.394409 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.394418 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.394441 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.394527 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.394612 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.395886 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.392617 4953 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.396435 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.396470 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.397036 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.397722 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.397919 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.392582 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.398112 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.470954 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.471281 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.472190 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.472256 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.472822 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.473358 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.473499 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.473823 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.473938 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.474327 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.474479 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.474540 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.474723 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.474995 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.474822 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.475662 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.475693 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.477805 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.478059 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.478258 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.478317 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.478428 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:43.978392612 +0000 UTC m=+22.002251645 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.478555 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.478657 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.478738 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.478845 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.478803 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479019 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479106 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479434 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479683 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479786 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479803 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479816 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479829 4953 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479838 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479848 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479858 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479870 4953 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479880 4953 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479890 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479899 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479910 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479919 4953 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479934 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479946 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479864 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479961 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.479970 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480001 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480041 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480056 4953 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480068 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 
10:11:43.480079 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480094 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480105 4953 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480116 4953 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480128 4953 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480142 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480154 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480165 4953 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480177 4953 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480192 4953 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480203 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480213 4953 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480227 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480238 4953 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480248 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480258 4953 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480271 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480282 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480292 4953 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480303 4953 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480318 4953 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480331 4953 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480343 4953 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480354 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480376 4953 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480390 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480401 4953 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480415 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480426 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480441 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480453 4953 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480467 4953 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480479 4953 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480491 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480502 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480519 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480530 4953 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480542 4953 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480558 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480587 4953 reconciler_common.go:293] "Volume detached 
for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480602 4953 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480660 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480678 4953 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480691 4953 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480704 4953 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480716 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480731 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480743 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480756 4953 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480774 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480790 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480802 4953 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480815 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480832 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480844 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480856 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480868 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480884 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480897 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480910 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480922 4953 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480937 4953 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480949 4953 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480962 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480979 4953 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.480992 4953 reconciler_common.go:293] "Volume detached for 
volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481004 4953 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481017 4953 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481031 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481043 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481053 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481067 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481082 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481093 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481106 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481117 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481132 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481144 4953 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481170 4953 reconciler_common.go:293] "Volume detached for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481187 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481199 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481213 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481235 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481252 4953 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481265 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481278 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481293 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481310 4953 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481322 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481335 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481348 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 
10:11:43.481364 4953 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481376 4953 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481388 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481402 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481414 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481426 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481439 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481455 4953 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481468 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481480 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481495 4953 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481513 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481526 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481538 4953 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481555 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481596 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481612 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481625 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481642 4953 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481660 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481673 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481687 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481704 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481717 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481729 4953 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481742 4953 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc 
kubenswrapper[4953]: I1211 10:11:43.481758 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481771 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481784 4953 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481804 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481817 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481830 4953 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481845 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481862 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481875 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481886 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481899 4953 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481917 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481930 4953 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481943 4953 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481961 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481975 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.481990 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482005 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482043 4953 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482059 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482074 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482088 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482106 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482120 4953 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482133 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482146 4953 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482163 4953 reconciler_common.go:293] "Volume detached for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482176 4953 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482189 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482215 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482206 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482271 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482289 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482299 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482312 4953 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482322 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482334 4953 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482351 4953 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482361 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 
11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482375 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.482402 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.483261 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.495269 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.495307 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.495330 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.495386 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:43.995367132 +0000 UTC m=+22.019226165 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.495515 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.499418 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.503688 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.505011 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.515487 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.535534 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.579259 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583495 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583563 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583595 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583606 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583615 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583625 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583695 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583706 4953 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583717 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583728 4953 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583739 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583748 4953 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583765 4953 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.583809 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.593079 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.623214 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.635413 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.713366 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.717321 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"263b0c0371666f28ff0dea1f13d44c4326c1caea4b5dcc92a6f93e5092c451a8"} Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.718979 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"617db9cd75de534eded7eeab8ba47feab2ed1105ca12694247e56246c2ac05d1"} Dec 11 10:11:43 crc kubenswrapper[4953]: W1211 10:11:43.741288 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-06b5290443a7f9024fe871a85776b6190e17d74450f05857f228ab08278cd929 WatchSource:0}: Error finding container 06b5290443a7f9024fe871a85776b6190e17d74450f05857f228ab08278cd929: Status 404 returned error can't find the container with id 06b5290443a7f9024fe871a85776b6190e17d74450f05857f228ab08278cd929 Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.915205 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.915305 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:43 crc kubenswrapper[4953]: I1211 10:11:43.915427 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.915491 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.915608 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.915634 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:11:43 crc kubenswrapper[4953]: E1211 10:11:43.915648 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:43.915624 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:44.915560283 +0000 UTC m=+22.939419316 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:44.203725 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:45.203682053 +0000 UTC m=+23.227541086 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:44.203754 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:11:45.203738474 +0000 UTC m=+23.227597507 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.203850 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.203896 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:44.204033 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:44.204083 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:45.204075215 +0000 UTC m=+23.227934248 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:44.204101 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:44.204153 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:44.204166 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:44.204236 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-11 10:11:45.20422136 +0000 UTC m=+23.228080393 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.294200 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.301011 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.413383 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.418030 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.473441 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:44.473653 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.511488 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.512876 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.513318 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.514816 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.515804 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.517508 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.518279 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.519225 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.520679 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.521517 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.523046 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.523898 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.524720 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.525332 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.526727 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.527342 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.528612 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.530366 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.534020 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.534949 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.535610 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.536868 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.537678 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.538309 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.539531 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.539785 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.542172 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.543140 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.545168 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.545940 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.547814 4953 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.548672 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.549353 4953 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.549506 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.552371 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.553630 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.554451 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.555181 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.557410 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.559265 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.559919 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.561255 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.561986 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.563023 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.563699 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.564866 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.565787 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.566099 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.570182 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.570808 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.571844 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.572850 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.573911 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.574454 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.575374 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.575964 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.576607 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.577586 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 11 10:11:44 crc 
kubenswrapper[4953]: I1211 10:11:44.578457 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.582614 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7cgmm"] Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.583310 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7cgmm" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.587077 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.587117 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.587399 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.589100 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.670856 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.671421 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f-serviceca\") pod \"node-ca-7cgmm\" (UID: \"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\") " pod="openshift-image-registry/node-ca-7cgmm" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.671453 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrv98\" (UniqueName: \"kubernetes.io/projected/a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f-kube-api-access-wrv98\") pod \"node-ca-7cgmm\" (UID: \"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\") " pod="openshift-image-registry/node-ca-7cgmm" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.671482 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f-host\") pod \"node-ca-7cgmm\" (UID: \"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\") " pod="openshift-image-registry/node-ca-7cgmm" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.688944 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.723340 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc"} Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.723396 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367"} Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.725191 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db"} Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.727272 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"06b5290443a7f9024fe871a85776b6190e17d74450f05857f228ab08278cd929"} Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.772567 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f-serviceca\") pod \"node-ca-7cgmm\" (UID: \"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\") " pod="openshift-image-registry/node-ca-7cgmm" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.772654 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrv98\" (UniqueName: \"kubernetes.io/projected/a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f-kube-api-access-wrv98\") pod \"node-ca-7cgmm\" (UID: \"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\") " pod="openshift-image-registry/node-ca-7cgmm" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.772686 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f-host\") pod \"node-ca-7cgmm\" (UID: \"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\") " pod="openshift-image-registry/node-ca-7cgmm" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.773916 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f-host\") pod \"node-ca-7cgmm\" (UID: \"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\") " pod="openshift-image-registry/node-ca-7cgmm" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.775164 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f-serviceca\") pod \"node-ca-7cgmm\" (UID: \"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\") " pod="openshift-image-registry/node-ca-7cgmm" Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:44.784316 4953 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.800602 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.819495 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrv98\" (UniqueName: \"kubernetes.io/projected/a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f-kube-api-access-wrv98\") pod \"node-ca-7cgmm\" (UID: \"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\") " pod="openshift-image-registry/node-ca-7cgmm" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.826173 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.837233 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.966390 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:44 crc kubenswrapper[4953]: I1211 10:11:44.974173 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:44.974340 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:11:44 crc kubenswrapper[4953]: E1211 10:11:44.974421 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:46.974388906 +0000 UTC m=+24.998247959 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.033261 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7cgmm" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.033545 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.050037 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.094312 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-ps59j"] Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.094742 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ps59j" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.102198 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.102590 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.102764 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-q2898"] Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.102874 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.103028 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.103246 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.200651 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed741fb7-1326-48b7-a713-17c9f0243eac-proxy-tls\") pod \"machine-config-daemon-q2898\" (UID: \"ed741fb7-1326-48b7-a713-17c9f0243eac\") " pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.200693 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed741fb7-1326-48b7-a713-17c9f0243eac-mcd-auth-proxy-config\") pod \"machine-config-daemon-q2898\" (UID: \"ed741fb7-1326-48b7-a713-17c9f0243eac\") " pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.200728 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ed741fb7-1326-48b7-a713-17c9f0243eac-rootfs\") pod \"machine-config-daemon-q2898\" (UID: \"ed741fb7-1326-48b7-a713-17c9f0243eac\") " pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.200744 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z9nk\" (UniqueName: \"kubernetes.io/projected/ed741fb7-1326-48b7-a713-17c9f0243eac-kube-api-access-5z9nk\") pod \"machine-config-daemon-q2898\" (UID: \"ed741fb7-1326-48b7-a713-17c9f0243eac\") " pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.201201 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.201442 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.201646 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 
10:11:45.202859 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.229864 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.391301 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.391384 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z9nk\" (UniqueName: \"kubernetes.io/projected/ed741fb7-1326-48b7-a713-17c9f0243eac-kube-api-access-5z9nk\") pod \"machine-config-daemon-q2898\" (UID: \"ed741fb7-1326-48b7-a713-17c9f0243eac\") " pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.391409 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ed9da9e3-3f97-49f6-9774-3c2f06987b9d-hosts-file\") pod \"node-resolver-ps59j\" (UID: \"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\") " pod="openshift-dns/node-resolver-ps59j" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.391431 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.391450 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.391475 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vngds\" (UniqueName: \"kubernetes.io/projected/ed9da9e3-3f97-49f6-9774-3c2f06987b9d-kube-api-access-vngds\") pod \"node-resolver-ps59j\" (UID: \"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\") " pod="openshift-dns/node-resolver-ps59j" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.391493 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed741fb7-1326-48b7-a713-17c9f0243eac-proxy-tls\") pod \"machine-config-daemon-q2898\" (UID: \"ed741fb7-1326-48b7-a713-17c9f0243eac\") " pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.391508 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed741fb7-1326-48b7-a713-17c9f0243eac-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-q2898\" (UID: \"ed741fb7-1326-48b7-a713-17c9f0243eac\") " pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.391531 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.391547 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ed741fb7-1326-48b7-a713-17c9f0243eac-rootfs\") pod \"machine-config-daemon-q2898\" (UID: \"ed741fb7-1326-48b7-a713-17c9f0243eac\") " pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.391612 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ed741fb7-1326-48b7-a713-17c9f0243eac-rootfs\") pod \"machine-config-daemon-q2898\" (UID: \"ed741fb7-1326-48b7-a713-17c9f0243eac\") " pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.391815 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:11:47.391800355 +0000 UTC m=+25.415659388 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.391927 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.392014 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.392063 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-11 10:11:47.392052343 +0000 UTC m=+25.415911376 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.392144 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.392171 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.392191 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.392231 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:47.392221049 +0000 UTC m=+25.416080082 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.392297 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.392311 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.392319 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.392363 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:47.392347453 +0000 UTC m=+25.416206486 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.393478 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed741fb7-1326-48b7-a713-17c9f0243eac-mcd-auth-proxy-config\") pod \"machine-config-daemon-q2898\" (UID: \"ed741fb7-1326-48b7-a713-17c9f0243eac\") " pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.505467 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ed9da9e3-3f97-49f6-9774-3c2f06987b9d-hosts-file\") pod \"node-resolver-ps59j\" (UID: \"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\") " pod="openshift-dns/node-resolver-ps59j" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.505556 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vngds\" (UniqueName: \"kubernetes.io/projected/ed9da9e3-3f97-49f6-9774-3c2f06987b9d-kube-api-access-vngds\") pod \"node-resolver-ps59j\" (UID: \"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\") " pod="openshift-dns/node-resolver-ps59j" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.505911 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ed9da9e3-3f97-49f6-9774-3c2f06987b9d-hosts-file\") pod \"node-resolver-ps59j\" (UID: \"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\") " pod="openshift-dns/node-resolver-ps59j" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.506053 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.506147 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.506242 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:45 crc kubenswrapper[4953]: E1211 10:11:45.506305 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.510395 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed741fb7-1326-48b7-a713-17c9f0243eac-proxy-tls\") pod \"machine-config-daemon-q2898\" (UID: \"ed741fb7-1326-48b7-a713-17c9f0243eac\") " pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.513534 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z9nk\" (UniqueName: \"kubernetes.io/projected/ed741fb7-1326-48b7-a713-17c9f0243eac-kube-api-access-5z9nk\") pod \"machine-config-daemon-q2898\" (UID: \"ed741fb7-1326-48b7-a713-17c9f0243eac\") " pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.534458 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.538188 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vngds\" (UniqueName: \"kubernetes.io/projected/ed9da9e3-3f97-49f6-9774-3c2f06987b9d-kube-api-access-vngds\") pod \"node-resolver-ps59j\" (UID: \"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\") " pod="openshift-dns/node-resolver-ps59j" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.605673 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.698146 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.730955 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" 
event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"5588258704b9884435804927bf1cee6cff68de0a18dc01853197acfb462115d5"} Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.732275 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7cgmm" event={"ID":"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f","Type":"ContainerStarted","Data":"84633a2b4dff3df6a0ddec5acd265e2c37362902b5edc09a207f1aeda55ebb05"} Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.815776 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:45Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.826028 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ps59j" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.832182 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-h4dvx"] Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.832516 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-h4dvx" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.837120 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x6f57"] Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.838001 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-pqtrx"] Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.838646 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.839077 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.843125 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.843643 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.845543 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.845737 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.845838 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.845965 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.854839 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.865607 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.866821 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.866977 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.867309 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.868632 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.868768 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 10:11:45 crc kubenswrapper[4953]: I1211 10:11:45.871186 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.160901 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:46Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.161520 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.215455 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.219101 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:46Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.236736 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:46Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239152 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-system-cni-dir\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239203 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-var-lib-cni-bin\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239229 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-run-multus-certs\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239251 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-etc-kubernetes\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239286 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-run-ovn-kubernetes\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239307 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-multus-cni-dir\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239326 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-var-lib-kubelet\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239348 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-slash\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239370 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbwwr\" (UniqueName: \"kubernetes.io/projected/644e1d40-ab80-469e-94b4-540e52b8e2c0-kube-api-access-lbwwr\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239391 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-cni-binary-copy\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239412 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-cnibin\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239433 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-env-overrides\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239454 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-log-socket\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239483 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-cni-bin\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239505 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/644e1d40-ab80-469e-94b4-540e52b8e2c0-multus-daemon-config\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239526 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-cnibin\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239550 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-node-log\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239595 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-cni-netd\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239630 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-etc-openvswitch\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239653 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-os-release\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239672 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-run-netns\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239695 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-systemd-units\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239717 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-ovnkube-script-lib\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239735 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-os-release\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239749 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/644e1d40-ab80-469e-94b4-540e52b8e2c0-cni-binary-copy\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239783 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-multus-conf-dir\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239804 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-var-lib-openvswitch\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239825 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239847 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c09d8243-6693-433e-bce1-8a99e5e37b95-ovn-node-metrics-cert\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239867 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78fl9\" (UniqueName: \"kubernetes.io/projected/c09d8243-6693-433e-bce1-8a99e5e37b95-kube-api-access-78fl9\") pod \"ovnkube-node-x6f57\" (UID: 
\"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239921 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-ovn\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239971 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-ovnkube-config\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.239991 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.240018 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-system-cni-dir\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.240052 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-openvswitch\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.240073 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f54nc\" (UniqueName: \"kubernetes.io/projected/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-kube-api-access-f54nc\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.240103 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-multus-socket-dir-parent\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.240125 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-var-lib-cni-multus\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.240146 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-run-netns\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.240167 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.240187 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-run-k8s-cni-cncf-io\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.240318 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-hostroot\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.240386 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-kubelet\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.240408 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-systemd\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.250169 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.294471 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:46Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341056 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-system-cni-dir\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341096 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-openvswitch\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341112 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-run-netns\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341127 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f54nc\" (UniqueName: \"kubernetes.io/projected/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-kube-api-access-f54nc\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341144 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-multus-socket-dir-parent\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341158 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-var-lib-cni-multus\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341184 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-kubelet\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341198 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-systemd\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341215 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341231 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-run-k8s-cni-cncf-io\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341245 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-hostroot\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341266 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-system-cni-dir\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " 
pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341270 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-system-cni-dir\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341278 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-openvswitch\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341281 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-var-lib-cni-bin\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341301 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-var-lib-cni-bin\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341319 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-kubelet\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341237 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-run-netns\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341359 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-var-lib-cni-multus\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341378 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-system-cni-dir\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341396 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-run-k8s-cni-cncf-io\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341404 4953 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-run-multus-certs\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341421 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-systemd\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341442 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-hostroot\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341463 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-etc-kubernetes\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341461 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-multus-socket-dir-parent\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341480 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-run-multus-certs\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341535 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-etc-kubernetes\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341595 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-run-ovn-kubernetes\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341620 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-slash\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341639 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-multus-cni-dir\") pod \"multus-h4dvx\" (UID: 
\"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341659 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-var-lib-kubelet\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341683 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-cni-binary-copy\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341706 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbwwr\" (UniqueName: \"kubernetes.io/projected/644e1d40-ab80-469e-94b4-540e52b8e2c0-kube-api-access-lbwwr\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341711 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-run-ovn-kubernetes\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341729 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-cnibin\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341741 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-var-lib-kubelet\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341760 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-cnibin\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341756 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-multus-cni-dir\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341793 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-log-socket\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: 
I1211 10:11:46.341765 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-log-socket\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341681 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-slash\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341831 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-cni-bin\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341856 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-env-overrides\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341891 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-cni-bin\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341906 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/644e1d40-ab80-469e-94b4-540e52b8e2c0-multus-daemon-config\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341934 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-node-log\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341967 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-cni-netd\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341995 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-cnibin\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342022 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-systemd-units\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342043 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-etc-openvswitch\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342038 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-cni-netd\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342065 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-os-release\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.341995 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-node-log\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342086 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-run-netns\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342103 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-systemd-units\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342106 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-etc-openvswitch\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342111 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-var-lib-openvswitch\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342150 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-host-run-netns\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" 
Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342184 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342208 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-ovnkube-script-lib\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342144 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-var-lib-openvswitch\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342232 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-os-release\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342252 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342256 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/644e1d40-ab80-469e-94b4-540e52b8e2c0-cni-binary-copy\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342296 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-multus-conf-dir\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342335 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-ovn\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342355 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c09d8243-6693-433e-bce1-8a99e5e37b95-ovn-node-metrics-cert\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 
10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342372 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78fl9\" (UniqueName: \"kubernetes.io/projected/c09d8243-6693-433e-bce1-8a99e5e37b95-kube-api-access-78fl9\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342388 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-ovnkube-config\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342403 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342522 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-env-overrides\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342518 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-os-release\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342562 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-os-release\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342780 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/644e1d40-ab80-469e-94b4-540e52b8e2c0-multus-daemon-config\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342800 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342840 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-multus-conf-dir\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342848 4953 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-ovn\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342929 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/644e1d40-ab80-469e-94b4-540e52b8e2c0-cnibin\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342915 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-ovnkube-script-lib\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.342959 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/644e1d40-ab80-469e-94b4-540e52b8e2c0-cni-binary-copy\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.343281 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-ovnkube-config\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.343284 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.343316 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-cni-binary-copy\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.345566 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c09d8243-6693-433e-bce1-8a99e5e37b95-ovn-node-metrics-cert\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.391894 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:46Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.409893 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"en
v-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:46Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.414096 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f54nc\" (UniqueName: \"kubernetes.io/projected/d80d6bd6-dd9c-433e-93cb-2be48e4cea72-kube-api-access-f54nc\") pod \"multus-additional-cni-plugins-pqtrx\" (UID: \"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\") " pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.421700 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78fl9\" (UniqueName: \"kubernetes.io/projected/c09d8243-6693-433e-bce1-8a99e5e37b95-kube-api-access-78fl9\") pod \"ovnkube-node-x6f57\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.426224 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbwwr\" (UniqueName: \"kubernetes.io/projected/644e1d40-ab80-469e-94b4-540e52b8e2c0-kube-api-access-lbwwr\") pod \"multus-h4dvx\" (UID: \"644e1d40-ab80-469e-94b4-540e52b8e2c0\") " pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.428388 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:46Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.561273 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-h4dvx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.562745 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.563241 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.563775 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:46 crc kubenswrapper[4953]: E1211 10:11:46.563860 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.581240 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:46Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.901540 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:46Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:46 crc kubenswrapper[4953]: W1211 10:11:46.906738 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc09d8243_6693_433e_bce1_8a99e5e37b95.slice/crio-1de50d676eb0b99c7d8a715b183ed3da13b81401140b684ae7ae1967be20b7c9 WatchSource:0}: Error finding container 1de50d676eb0b99c7d8a715b183ed3da13b81401140b684ae7ae1967be20b7c9: Status 404 returned error can't find the container with id 1de50d676eb0b99c7d8a715b183ed3da13b81401140b684ae7ae1967be20b7c9 Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.953120 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7cgmm" event={"ID":"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f","Type":"ContainerStarted","Data":"15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06"} Dec 11 10:11:46 crc kubenswrapper[4953]: I1211 10:11:46.958835 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:46Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:46.961849 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0"} Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:46.961892 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a"} Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:46.963701 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" event={"ID":"d80d6bd6-dd9c-433e-93cb-2be48e4cea72","Type":"ContainerStarted","Data":"2b66dfdff048e062a4a044ec5cb52c90c1cebdd7c5fe3728fd8bb2f46169808a"} Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:46.969986 4953 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-dns/node-resolver-ps59j" event={"ID":"ed9da9e3-3f97-49f6-9774-3c2f06987b9d","Type":"ContainerStarted","Data":"c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f"} Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:46.970018 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ps59j" event={"ID":"ed9da9e3-3f97-49f6-9774-3c2f06987b9d","Type":"ContainerStarted","Data":"6230b58036fd343508f79ccc628f9b2bb7b6e74dc5ab8cd49ae3361e6bb08488"} Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:47.012531 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.012828 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.013624 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:51.013537349 +0000 UTC m=+29.037396452 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:47.421009 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:47Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:47.421363 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 
10:11:47.421456 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:47.421480 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:47.421507 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.421591 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:11:51.421549432 +0000 UTC m=+29.445408465 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.421638 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.421653 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.421663 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.421696 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:51.421685136 +0000 UTC m=+29.445544169 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.421702 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.421721 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.421734 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.421741 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.421761 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:51.421755889 +0000 UTC m=+29.445614922 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.421777 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:51.421767719 +0000 UTC m=+29.445626852 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:47.472297 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:47.472318 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.473116 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:11:47 crc kubenswrapper[4953]: E1211 10:11:47.473214 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:47.593260 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 
10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:47Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:47 crc kubenswrapper[4953]: I1211 10:11:47.930211 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:47Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.027544 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h4dvx" event={"ID":"644e1d40-ab80-469e-94b4-540e52b8e2c0","Type":"ContainerStarted","Data":"5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc"} Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.027606 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h4dvx" event={"ID":"644e1d40-ab80-469e-94b4-540e52b8e2c0","Type":"ContainerStarted","Data":"929d2935c056bae2f0ab44730fbfb1c0369411d1edf7ce09b06282b8ac910251"} Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.028687 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f" exitCode=0 Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.028760 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"} Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.028792 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"1de50d676eb0b99c7d8a715b183ed3da13b81401140b684ae7ae1967be20b7c9"} Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.038851 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" event={"ID":"d80d6bd6-dd9c-433e-93cb-2be48e4cea72","Type":"ContainerStarted","Data":"8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d"} Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.173717 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.237151 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.272974 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.329073 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.355843 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.375635 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.411416 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.429371 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.445317 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 
10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.458435 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.471249 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.472599 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:48 crc kubenswrapper[4953]: E1211 10:11:48.472867 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.490990 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.591630 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.623322 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z 
is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.639261 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:48 crc kubenswrapper[4953]: I1211 10:11:48.653071 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:48Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.051848 4953 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"} Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.368038 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.369867 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.369900 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.369909 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.370037 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.377378 4953 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.377706 4953 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.378935 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.378964 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.378976 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.378994 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.379007 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:49Z","lastTransitionTime":"2025-12-11T10:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:49 crc kubenswrapper[4953]: E1211 10:11:49.442692 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:49Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.450325 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.450373 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.450387 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.450403 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.450414 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:49Z","lastTransitionTime":"2025-12-11T10:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.472669 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.472676 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:49 crc kubenswrapper[4953]: E1211 10:11:49.472829 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:11:49 crc kubenswrapper[4953]: E1211 10:11:49.472888 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:11:49 crc kubenswrapper[4953]: E1211 10:11:49.490034 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:49Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.494264 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.494301 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
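Every attempt in this cycle fails identically: the node-status PATCH is intercepted by the node.network-node-identity.openshift.io validating webhook on https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, well before the current time the kubelet reports. A minimal sketch of confirming that expiry independently of the kubelet, assuming Python 3 and the third-party cryptography package are available on the node (the host, port, and expected date come straight from the log above):

    import socket, ssl
    from cryptography import x509  # third-party package; assumed installed

    # Fetch the webhook's serving certificate WITHOUT verifying it --
    # verification is exactly what fails in the kubelet log.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    # Expected to print 2025-08-24 17:21:41 per the x509 error above.
    print("notAfter:", cert.not_valid_after)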
event="NodeHasNoDiskPressure" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.494311 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.494327 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.494340 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:49Z","lastTransitionTime":"2025-12-11T10:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:49 crc kubenswrapper[4953]: E1211 10:11:49.524123 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:49Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.528565 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.528628 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
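Each attempt carries the same strategic merge patch against the Node object: the $setElementOrder/conditions directive pins the merge order of the conditions list (which is keyed on type), while allocatable, capacity, images, and nodeInfo are sent as plain replacements. A skeleton of that payload as reconstructed from the log, with the long image list elided (Python literal; every value shown is one visible above):

    import json

    # Skeleton of the kubelet's node-status strategic merge patch seen in
    # this log; the real payload also carries the full "images" list.
    patch = {
        "status": {
            # Pins the order in which list elements (keyed on "type")
            # are merged server-side.
            "$setElementOrder/conditions": [
                {"type": "MemoryPressure"},
                {"type": "DiskPressure"},
                {"type": "PIDPressure"},
                {"type": "Ready"},
            ],
            "allocatable": {"cpu": "11800m",
                            "ephemeral-storage": "76396645454",
                            "memory": "32404556Ki"},
            "capacity": {"cpu": "12",
                         "ephemeral-storage": "83293888Ki",
                         "memory": "32865356Ki"},
            "conditions": [
                # MemoryPressure/DiskPressure/PIDPressure entries elided.
                {"type": "Ready", "status": "False",
                 "reason": "KubeletNotReady"},
            ],
            # "images": [...] and "nodeInfo": {...} elided.
        }
    }
    print(json.dumps(patch, indent=2)[:200])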
event="NodeHasNoDiskPressure" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.528640 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.528658 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.528671 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:49Z","lastTransitionTime":"2025-12-11T10:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:49 crc kubenswrapper[4953]: E1211 10:11:49.543632 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:49Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.684360 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.684403 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
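Independent of the webhook failure, the Ready=False condition itself comes from the missing CNI configuration: the OVN-Kubernetes pods that write it are still starting (their ContainerStarted events appear at 10:11:50 below). A sketch of watching for that config to appear, assuming local filesystem access on the node; the suffixes checked are the common CNI config extensions:

    import os, time

    # The kubelet stays NotReady until a CNI config shows up here.
    CNI_DIR = "/etc/kubernetes/cni/net.d/"

    for _ in range(30):                      # poll for up to ~60s
        entries = os.listdir(CNI_DIR) if os.path.isdir(CNI_DIR) else []
        confs = [e for e in entries
                 if e.endswith((".conf", ".conflist", ".json"))]
        if confs:
            print("CNI configuration present:", confs)
            break
        time.sleep(2)
    else:
        print("still no CNI configuration in", CNI_DIR)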
event="NodeHasNoDiskPressure" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.684415 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.684433 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.684445 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:49Z","lastTransitionTime":"2025-12-11T10:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:49 crc kubenswrapper[4953]: E1211 10:11:49.718640 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:49Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:49 crc kubenswrapper[4953]: E1211 10:11:49.718841 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.735471 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
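The cycle ends here: after the fifth consecutive patch failure the kubelet logs "update node status exceeds retry count" and gives up until the next status sync. Upstream kubelet bounds this loop with the nodeStatusUpdateRetry constant (5 in current sources); an illustrative Python sketch of that shape, with everything except the constant's name and the two quoted log strings hypothetical:

    # Bounded retry, mirroring the shape of kubelet's node status update
    # loop (not its actual implementation, which is Go).
    NODE_STATUS_UPDATE_RETRY = 5  # upstream constant: nodeStatusUpdateRetry

    def update_node_status(try_patch):
        """try_patch() performs one PATCH and returns None on success."""
        for _ in range(NODE_STATUS_UPDATE_RETRY):
            if try_patch() is None:
                return
            # kubelet: "Error updating node status, will retry"
        # kubelet: "Unable to update node status"
        raise RuntimeError("update node status exceeds retry count")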
event="NodeHasSufficientMemory" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.735515 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.735541 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.735585 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.735601 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:49Z","lastTransitionTime":"2025-12-11T10:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.838638 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.838670 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.838679 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.838692 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.838700 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:49Z","lastTransitionTime":"2025-12-11T10:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.941427 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.941466 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.941475 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.941489 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:49 crc kubenswrapper[4953]: I1211 10:11:49.941498 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:49Z","lastTransitionTime":"2025-12-11T10:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.044067 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.044089 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.044097 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.044109 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.044117 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:50Z","lastTransitionTime":"2025-12-11T10:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.055185 4953 generic.go:334] "Generic (PLEG): container finished" podID="d80d6bd6-dd9c-433e-93cb-2be48e4cea72" containerID="8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d" exitCode=0 Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.055233 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" event={"ID":"d80d6bd6-dd9c-433e-93cb-2be48e4cea72","Type":"ContainerDied","Data":"8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.062354 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.062404 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.062416 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.062427 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.077303 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.089117 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.104004 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.116462 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.135494 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.147002 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.147044 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.147055 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.147068 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.147077 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:50Z","lastTransitionTime":"2025-12-11T10:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.150137 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.164444 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.178733 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 
10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.196174 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.212763 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.228487 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.240145 4953 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 
11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.249288 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.249320 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.249329 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.249343 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.249352 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:50Z","lastTransitionTime":"2025-12-11T10:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.263066 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z 
is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.276217 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.351419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 
10:11:50.351695 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.351762 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.351821 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.351876 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:50Z","lastTransitionTime":"2025-12-11T10:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.454381 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.454772 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.454906 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.455036 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.455158 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:50Z","lastTransitionTime":"2025-12-11T10:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.472734 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:50 crc kubenswrapper[4953]: E1211 10:11:50.472942 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.560531 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.560997 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.561444 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.561807 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.562173 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:50Z","lastTransitionTime":"2025-12-11T10:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.685213 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.685247 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.685255 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.685268 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.685278 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:50Z","lastTransitionTime":"2025-12-11T10:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.788059 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.788276 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.788354 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.788432 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.788520 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:50Z","lastTransitionTime":"2025-12-11T10:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.890951 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.890993 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.891003 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.891018 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:50 crc kubenswrapper[4953]: I1211 10:11:50.891028 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:50Z","lastTransitionTime":"2025-12-11T10:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.005920 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.006760 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.006793 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.006804 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.006817 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.006829 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:51Z","lastTransitionTime":"2025-12-11T10:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.020041 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.071792 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc 
kubenswrapper[4953]: I1211 10:11:51.109128 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.109276 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.109334 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:59.109310041 +0000 UTC m=+37.133169074 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.111453 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.111493 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.111508 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.111524 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.111533 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:51Z","lastTransitionTime":"2025-12-11T10:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.111980 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" event={"ID":"d80d6bd6-dd9c-433e-93cb-2be48e4cea72","Type":"ContainerStarted","Data":"d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8"} Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.120327 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.125327 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"} Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.132872 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.141418 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.153378 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.164938 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.176378 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.188611 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.201231 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.213842 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.214115 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.214141 4953 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.214149 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.214161 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.214171 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:51Z","lastTransitionTime":"2025-12-11T10:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.226537 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.245662 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z 
is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.257643 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.273969 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.285548 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.296243 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.310849 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.317901 
4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.317930 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.317938 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.317950 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.317959 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:51Z","lastTransitionTime":"2025-12-11T10:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.321251 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.330938 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.345622 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.360675 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.374649 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z"
Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.392814 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.407163 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.420980 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.421025 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.421040 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.421056 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.421069 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:51Z","lastTransitionTime":"2025-12-11T10:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.421196 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.437445 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.448173 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.472680 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.472718 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.472923 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.473090 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.513810 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.513975 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.514023 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.514064 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.514210 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.514230 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.514242 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.514298 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:59.514280461 +0000 UTC m=+37.538139494 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.514604 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.514674 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:11:59.514646913 +0000 UTC m=+37.538505946 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.514784 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.514884 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.514900 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.514841 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:59.514823629 +0000 UTC m=+37.538682712 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:11:51 crc kubenswrapper[4953]: E1211 10:11:51.515174 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:59.51515853 +0000 UTC m=+37.539017593 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.540270 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.540602 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.540828 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.540909 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.540994 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:51Z","lastTransitionTime":"2025-12-11T10:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.643624 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.643883 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.644002 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.644132 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.644283 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:51Z","lastTransitionTime":"2025-12-11T10:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.746878 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.746918 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.746930 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.746946 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.746958 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:51Z","lastTransitionTime":"2025-12-11T10:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.849471 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.849508 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.849517 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.849532 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.849542 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:51Z","lastTransitionTime":"2025-12-11T10:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.954261 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.954319 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.954336 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.954360 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:51 crc kubenswrapper[4953]: I1211 10:11:51.954380 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:51Z","lastTransitionTime":"2025-12-11T10:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.057872 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.057916 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.057925 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.057940 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.057950 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:52Z","lastTransitionTime":"2025-12-11T10:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.159874 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.159914 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.159924 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.159940 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.159952 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:52Z","lastTransitionTime":"2025-12-11T10:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.262180 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.262218 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.262229 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.262243 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.262254 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:52Z","lastTransitionTime":"2025-12-11T10:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.365130 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.365435 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.365503 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.365602 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.365690 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:52Z","lastTransitionTime":"2025-12-11T10:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.468471 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.468516 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.468536 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.468557 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.468591 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:52Z","lastTransitionTime":"2025-12-11T10:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.472717 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:52 crc kubenswrapper[4953]: E1211 10:11:52.472815 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.485756 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.496196 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.511379 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.525605 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.536253 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.546932 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.558032 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.568713 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.570280 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.570314 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.570326 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.570341 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.570354 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:52Z","lastTransitionTime":"2025-12-11T10:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.581966 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.601258 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.614940 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.629037 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.640847 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.689954 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.691212 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.691245 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.691276 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.691292 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.691301 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:52Z","lastTransitionTime":"2025-12-11T10:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.794024 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.794084 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.794095 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.794113 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.794124 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:52Z","lastTransitionTime":"2025-12-11T10:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.896869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.896935 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.896954 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.896995 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:52 crc kubenswrapper[4953]: I1211 10:11:52.897014 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:52Z","lastTransitionTime":"2025-12-11T10:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.000086 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.000150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.000167 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.000188 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.000204 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:53Z","lastTransitionTime":"2025-12-11T10:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.102446 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.102494 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.102508 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.102524 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.102536 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:53Z","lastTransitionTime":"2025-12-11T10:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.138455 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.142039 4953 generic.go:334] "Generic (PLEG): container finished" podID="d80d6bd6-dd9c-433e-93cb-2be48e4cea72" containerID="d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8" exitCode=0 Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.142117 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" event={"ID":"d80d6bd6-dd9c-433e-93cb-2be48e4cea72","Type":"ContainerDied","Data":"d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.150448 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.160922 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.177619 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.191835 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.205119 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.205173 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.205183 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.205198 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.205207 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:53Z","lastTransitionTime":"2025-12-11T10:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.207169 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.219794 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771a
ee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.243187 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.258019 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.274785 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.291300 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.303442 
4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.308899 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.308964 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.308982 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.309006 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.309024 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:53Z","lastTransitionTime":"2025-12-11T10:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.320853 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.337278 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.348540 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.363377 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.381239 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.397272 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.410782 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.411510 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.411556 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.411590 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.411609 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.411622 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:53Z","lastTransitionTime":"2025-12-11T10:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.421897 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.431397 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.444601 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.454936 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.467754 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.472784 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.472856 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:53 crc kubenswrapper[4953]: E1211 10:11:53.472919 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:11:53 crc kubenswrapper[4953]: E1211 10:11:53.473046 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.479356 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.489809 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.501134 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.512773 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.514360 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.514391 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.514403 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.514418 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.514429 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:53Z","lastTransitionTime":"2025-12-11T10:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.522651 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.544936 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:53Z 
is after 2025-08-24T17:21:41Z" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.617232 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.617279 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.617295 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.617316 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.617332 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:53Z","lastTransitionTime":"2025-12-11T10:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.721885 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.721910 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.721920 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.721938 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.721951 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:53Z","lastTransitionTime":"2025-12-11T10:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.828738 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.829086 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.829096 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.829109 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.829120 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:53Z","lastTransitionTime":"2025-12-11T10:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.931772 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.931824 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.931850 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.931873 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:53 crc kubenswrapper[4953]: I1211 10:11:53.931888 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:53Z","lastTransitionTime":"2025-12-11T10:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.034503 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.034818 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.034918 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.035028 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.035118 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:54Z","lastTransitionTime":"2025-12-11T10:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.139107 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.139183 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.139210 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.139241 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.139265 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:54Z","lastTransitionTime":"2025-12-11T10:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.156626 4953 generic.go:334] "Generic (PLEG): container finished" podID="d80d6bd6-dd9c-433e-93cb-2be48e4cea72" containerID="79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23" exitCode=0 Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.156694 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" event={"ID":"d80d6bd6-dd9c-433e-93cb-2be48e4cea72","Type":"ContainerDied","Data":"79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23"} Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.170783 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.187959 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.203606 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.218267 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.229982 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.241933 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.241964 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.241973 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.241987 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.241998 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:54Z","lastTransitionTime":"2025-12-11T10:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.242186 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.252248 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.288445 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.309457 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.331424 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.344020 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.344067 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.344079 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.344096 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.344109 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:54Z","lastTransitionTime":"2025-12-11T10:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.345815 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.358638 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.369837 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.380178 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:54Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.446022 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.446076 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.446093 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.446116 4953 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.446131 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:54Z","lastTransitionTime":"2025-12-11T10:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.472467 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:54 crc kubenswrapper[4953]: E1211 10:11:54.472630 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.547847 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.547892 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.547904 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.547921 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.547933 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:54Z","lastTransitionTime":"2025-12-11T10:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.650026 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.650065 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.650075 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.650090 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.650108 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:54Z","lastTransitionTime":"2025-12-11T10:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.752489 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.752533 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.752544 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.752557 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.752567 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:54Z","lastTransitionTime":"2025-12-11T10:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.855991 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.856045 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.856060 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.856080 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.856096 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:54Z","lastTransitionTime":"2025-12-11T10:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.958716 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.958758 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.958768 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.958784 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:54 crc kubenswrapper[4953]: I1211 10:11:54.958793 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:54Z","lastTransitionTime":"2025-12-11T10:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.060899 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.060963 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.060980 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.061002 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.061019 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:55Z","lastTransitionTime":"2025-12-11T10:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.162490 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.162528 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.162541 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.162557 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.162589 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:55Z","lastTransitionTime":"2025-12-11T10:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.164237 4953 generic.go:334] "Generic (PLEG): container finished" podID="d80d6bd6-dd9c-433e-93cb-2be48e4cea72" containerID="bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160" exitCode=0 Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.164276 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" event={"ID":"d80d6bd6-dd9c-433e-93cb-2be48e4cea72","Type":"ContainerDied","Data":"bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160"} Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.179928 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.192200 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.204735 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.213257 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.225880 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.238408 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.253444 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.264812 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.264850 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.264858 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.264874 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.264884 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:55Z","lastTransitionTime":"2025-12-11T10:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.265868 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.277402 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.291942 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.305166 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.316887 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.333603 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.344386 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:55Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.367397 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.367449 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.367460 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.367480 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.367494 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:55Z","lastTransitionTime":"2025-12-11T10:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.469866 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.469893 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.469903 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.469917 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.469927 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:55Z","lastTransitionTime":"2025-12-11T10:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.472607 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:55 crc kubenswrapper[4953]: E1211 10:11:55.472711 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.472845 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:55 crc kubenswrapper[4953]: E1211 10:11:55.472986 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.572286 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.572328 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.572338 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.572350 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.572358 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:55Z","lastTransitionTime":"2025-12-11T10:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.675346 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.675378 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.675386 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.675397 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.675406 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:55Z","lastTransitionTime":"2025-12-11T10:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.778662 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.778703 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.778713 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.778739 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.778750 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:55Z","lastTransitionTime":"2025-12-11T10:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.912748 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.912864 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.912883 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.912908 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:55 crc kubenswrapper[4953]: I1211 10:11:55.912924 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:55Z","lastTransitionTime":"2025-12-11T10:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.014744 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.014787 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.014797 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.014823 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.014832 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:56Z","lastTransitionTime":"2025-12-11T10:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.118133 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.118170 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.118180 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.118195 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.118206 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:56Z","lastTransitionTime":"2025-12-11T10:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.171522 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc"} Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.172603 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.172680 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.177016 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" event={"ID":"d80d6bd6-dd9c-433e-93cb-2be48e4cea72","Type":"ContainerStarted","Data":"c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6"} Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.191772 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.239943 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.251111 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.251143 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.251153 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.251169 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.251179 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:56Z","lastTransitionTime":"2025-12-11T10:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.256840 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.257288 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.257755 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.279535 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.297336 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.310466 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.327329 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.339208 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.349497 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.353729 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.353752 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.353760 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.353772 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.353780 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:56Z","lastTransitionTime":"2025-12-11T10:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.360140 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.376402 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.391398 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.411441 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.423685 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.439470 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.452203 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.456863 4953 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.456895 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.456906 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.456920 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.456931 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:56Z","lastTransitionTime":"2025-12-11T10:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.472404 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:56 crc kubenswrapper[4953]: E1211 10:11:56.472546 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.475569 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2e
b757952f34c702fa0aa2f9fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.489354 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.505859 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.520932 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.532518 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.542757 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.558587 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a
86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name
\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.560447 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.560498 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.560508 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.560802 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.560919 4953 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:56Z","lastTransitionTime":"2025-12-11T10:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.582252 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08
287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.597535 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.612113 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.626456 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.642785 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:56Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.670310 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.670356 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.670366 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.670383 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.670393 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:56Z","lastTransitionTime":"2025-12-11T10:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.773309 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.773391 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.773417 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.773448 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.773472 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:56Z","lastTransitionTime":"2025-12-11T10:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.878313 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.878361 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.878376 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.878393 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.878405 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:56Z","lastTransitionTime":"2025-12-11T10:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.981489 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.981522 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.981530 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.981544 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:56 crc kubenswrapper[4953]: I1211 10:11:56.981553 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:56Z","lastTransitionTime":"2025-12-11T10:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.083825 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.083879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.083888 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.083903 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.083916 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:57Z","lastTransitionTime":"2025-12-11T10:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.179798 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.186169 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.186199 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.186209 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.186223 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.186233 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:57Z","lastTransitionTime":"2025-12-11T10:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.288714 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.288750 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.288759 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.288772 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.288781 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:57Z","lastTransitionTime":"2025-12-11T10:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.391038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.391073 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.391083 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.391097 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.391105 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:57Z","lastTransitionTime":"2025-12-11T10:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.472779 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.472870 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:57 crc kubenswrapper[4953]: E1211 10:11:57.472935 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:11:57 crc kubenswrapper[4953]: E1211 10:11:57.473051 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.493659 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.493716 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.493727 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.493744 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.493756 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:57Z","lastTransitionTime":"2025-12-11T10:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.660308 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.660348 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.660357 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.660370 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.660380 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:57Z","lastTransitionTime":"2025-12-11T10:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.763137 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.763171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.763180 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.763208 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.763217 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:57Z","lastTransitionTime":"2025-12-11T10:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.865832 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.865898 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.865917 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.865942 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:57 crc kubenswrapper[4953]: I1211 10:11:57.865959 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:57Z","lastTransitionTime":"2025-12-11T10:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.018755 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.018786 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.018794 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.018808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.018821 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:58Z","lastTransitionTime":"2025-12-11T10:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.121045 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.121085 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.121094 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.121108 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.121119 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:58Z","lastTransitionTime":"2025-12-11T10:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.185979 4953 generic.go:334] "Generic (PLEG): container finished" podID="d80d6bd6-dd9c-433e-93cb-2be48e4cea72" containerID="c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6" exitCode=0 Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.186115 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" event={"ID":"d80d6bd6-dd9c-433e-93cb-2be48e4cea72","Type":"ContainerDied","Data":"c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6"} Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.186204 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.231456 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.231503 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.231518 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.231541 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.231556 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:58Z","lastTransitionTime":"2025-12-11T10:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.247242 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.262711 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.285091 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.300195 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.314657 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.331708 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.334206 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.334231 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.334239 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.334252 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.334261 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:58Z","lastTransitionTime":"2025-12-11T10:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.343136 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.360354 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.375993 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.392819 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.407440 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.421354 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.435379 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.436857 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.437035 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.437407 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.437684 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.437840 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:58Z","lastTransitionTime":"2025-12-11T10:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.451061 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:58Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.473288 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:58 crc kubenswrapper[4953]: E1211 10:11:58.473434 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.539678 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.539722 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.539743 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.539759 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.539770 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:58Z","lastTransitionTime":"2025-12-11T10:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.739431 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.739474 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.739487 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.739504 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.739517 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:58Z","lastTransitionTime":"2025-12-11T10:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.841634 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.841671 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.841683 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.841701 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.841713 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:58Z","lastTransitionTime":"2025-12-11T10:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.944462 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.944505 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.944517 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.944533 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:58 crc kubenswrapper[4953]: I1211 10:11:58.944545 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:58Z","lastTransitionTime":"2025-12-11T10:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.047001 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.047389 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.047464 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.047548 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.047631 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:59Z","lastTransitionTime":"2025-12-11T10:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.139778 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.139874 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.139940 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:12:15.139915863 +0000 UTC m=+53.163774896 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.151145 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.151188 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.151197 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.151216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.151227 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:59Z","lastTransitionTime":"2025-12-11T10:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.191139 4953 generic.go:334] "Generic (PLEG): container finished" podID="d80d6bd6-dd9c-433e-93cb-2be48e4cea72" containerID="22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da" exitCode=0 Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.191192 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" event={"ID":"d80d6bd6-dd9c-433e-93cb-2be48e4cea72","Type":"ContainerDied","Data":"22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da"} Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.267114 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.267445 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.267477 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.267488 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.267501 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.267511 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:59Z","lastTransitionTime":"2025-12-11T10:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.282375 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.294277 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.313913 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d
d8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.331022 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.351820 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.365539 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.377882 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.377924 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.377936 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.377970 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.377984 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:59Z","lastTransitionTime":"2025-12-11T10:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.379993 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.400973 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.418354 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.429665 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.458181 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.472304 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.472413 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.472316 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.472731 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.473360 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.480138 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.480171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:59 crc 
kubenswrapper[4953]: I1211 10:11:59.480179 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.480191 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.480199 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:59Z","lastTransitionTime":"2025-12-11T10:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.491163 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.542564 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.542651 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.542732 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:12:15.54270722 +0000 UTC m=+53.566566253 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.542779 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.542796 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.542798 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.542812 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.542834 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.542855 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 10:12:15.542841835 +0000 UTC m=+53.566700858 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.542957 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.542994 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-11 10:12:15.54298336 +0000 UTC m=+53.566842393 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.543046 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.543057 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.543067 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.543089 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 10:12:15.543083093 +0000 UTC m=+53.566942126 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.581723 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.581763 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.581773 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.581788 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.581798 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:59Z","lastTransitionTime":"2025-12-11T10:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.684979 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.685033 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.685042 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.685059 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.685067 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:59Z","lastTransitionTime":"2025-12-11T10:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.787090 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.787116 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.787124 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.787137 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.787145 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:59Z","lastTransitionTime":"2025-12-11T10:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.890140 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.890194 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.890210 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.890231 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.890248 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:59Z","lastTransitionTime":"2025-12-11T10:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.937183 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd"] Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.937687 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.940995 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.941458 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.945670 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6c4cea1-0872-4490-8195-2a195090982c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bjhsd\" (UID: \"d6c4cea1-0872-4490-8195-2a195090982c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.945722 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnnf2\" (UniqueName: \"kubernetes.io/projected/d6c4cea1-0872-4490-8195-2a195090982c-kube-api-access-cnnf2\") pod \"ovnkube-control-plane-749d76644c-bjhsd\" (UID: \"d6c4cea1-0872-4490-8195-2a195090982c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.945744 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6c4cea1-0872-4490-8195-2a195090982c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bjhsd\" (UID: \"d6c4cea1-0872-4490-8195-2a195090982c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.945779 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6c4cea1-0872-4490-8195-2a195090982c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bjhsd\" (UID: \"d6c4cea1-0872-4490-8195-2a195090982c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.972115 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.976809 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.976868 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.976885 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.976909 4953 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.976928 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:59Z","lastTransitionTime":"2025-12-11T10:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.987458 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: E1211 10:11:59.992279 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"2
8c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:11:59Z is after 2025-08-24T17:21:41Z" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.998185 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.998258 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.998284 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.998314 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:11:59 crc kubenswrapper[4953]: I1211 10:11:59.998330 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:11:59Z","lastTransitionTime":"2025-12-11T10:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.002947 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: E1211 10:12:00.011733 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.015953 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.016049 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.016077 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.016111 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.016137 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:00Z","lastTransitionTime":"2025-12-11T10:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.016209 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: E1211 10:12:00.030363 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.034890 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.034934 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.034946 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.034966 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.034981 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:00Z","lastTransitionTime":"2025-12-11T10:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.039062 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.047285 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6c4cea1-0872-4490-8195-2a195090982c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bjhsd\" (UID: \"d6c4cea1-0872-4490-8195-2a195090982c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.047352 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6c4cea1-0872-4490-8195-2a195090982c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bjhsd\" (UID: \"d6c4cea1-0872-4490-8195-2a195090982c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.047378 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6c4cea1-0872-4490-8195-2a195090982c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bjhsd\" (UID: \"d6c4cea1-0872-4490-8195-2a195090982c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.047425 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnnf2\" (UniqueName: \"kubernetes.io/projected/d6c4cea1-0872-4490-8195-2a195090982c-kube-api-access-cnnf2\") pod \"ovnkube-control-plane-749d76644c-bjhsd\" (UID: \"d6c4cea1-0872-4490-8195-2a195090982c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:12:00 crc kubenswrapper[4953]: E1211 10:12:00.048107 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"2
8c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.048825 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6c4cea1-0872-4490-8195-2a195090982c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bjhsd\" (UID: \"d6c4cea1-0872-4490-8195-2a195090982c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.048964 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6c4cea1-0872-4490-8195-2a195090982c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bjhsd\" (UID: \"d6c4cea1-0872-4490-8195-2a195090982c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.052458 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.052490 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.052500 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.052519 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.052530 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:00Z","lastTransitionTime":"2025-12-11T10:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.053896 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.054096 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6c4cea1-0872-4490-8195-2a195090982c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bjhsd\" (UID: \"d6c4cea1-0872-4490-8195-2a195090982c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.063015 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnnf2\" (UniqueName: \"kubernetes.io/projected/d6c4cea1-0872-4490-8195-2a195090982c-kube-api-access-cnnf2\") pod \"ovnkube-control-plane-749d76644c-bjhsd\" (UID: 
\"d6c4cea1-0872-4490-8195-2a195090982c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.067927 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: E1211 10:12:00.068880 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: E1211 10:12:00.069123 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.071059 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.071097 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.071118 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.071138 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.071153 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:00Z","lastTransitionTime":"2025-12-11T10:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.091792 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf54
1ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.106799 4953 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.119921 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.134136 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.146470 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.157979 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.171278 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.173705 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.173748 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.173759 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.173777 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.173789 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:00Z","lastTransitionTime":"2025-12-11T10:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.184355 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:00Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.253323 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.276770 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.276836 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.276855 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.276882 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.276900 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:00Z","lastTransitionTime":"2025-12-11T10:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.281026 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.281448 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.311114 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" probeResult="failure" output="" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.347028 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" probeResult="failure" output="" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.379824 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.379870 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.379881 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.379902 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.379915 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:00Z","lastTransitionTime":"2025-12-11T10:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.474633 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:00 crc kubenswrapper[4953]: E1211 10:12:00.474746 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.491169 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.491206 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.491216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.491231 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.491241 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:00Z","lastTransitionTime":"2025-12-11T10:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.593514 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.593547 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.593555 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.593568 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.593588 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:00Z","lastTransitionTime":"2025-12-11T10:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.695892 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.696211 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.696221 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.696237 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.696250 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:00Z","lastTransitionTime":"2025-12-11T10:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.798301 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.798344 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.798356 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.798371 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.798381 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:00Z","lastTransitionTime":"2025-12-11T10:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.904523 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.904567 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.904603 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.904617 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:00 crc kubenswrapper[4953]: I1211 10:12:00.904629 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:00Z","lastTransitionTime":"2025-12-11T10:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.007724 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.007762 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.007771 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.007788 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.007819 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:01Z","lastTransitionTime":"2025-12-11T10:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.116615 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.116656 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.116666 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.116682 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.116693 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:01Z","lastTransitionTime":"2025-12-11T10:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.235768 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.235813 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.235823 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.235841 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.235853 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:01Z","lastTransitionTime":"2025-12-11T10:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.238383 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" event={"ID":"d6c4cea1-0872-4490-8195-2a195090982c","Type":"ContainerStarted","Data":"1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842"} Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.238423 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" event={"ID":"d6c4cea1-0872-4490-8195-2a195090982c","Type":"ContainerStarted","Data":"6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced"} Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.238433 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" event={"ID":"d6c4cea1-0872-4490-8195-2a195090982c","Type":"ContainerStarted","Data":"39fad317055dc5dab9331e73514f03d2c2868aa447ed1d3841428822fcf93136"} Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.241191 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" event={"ID":"d80d6bd6-dd9c-433e-93cb-2be48e4cea72","Type":"ContainerStarted","Data":"7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317"} Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.252839 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.266987 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.277132 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.289489 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.305436 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.528415 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:01 crc kubenswrapper[4953]: E1211 10:12:01.528518 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.528593 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.528642 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:01 crc kubenswrapper[4953]: E1211 10:12:01.528670 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.528767 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:01 crc kubenswrapper[4953]: E1211 10:12:01.528762 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.528787 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.528812 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.528829 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.528842 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:01Z","lastTransitionTime":"2025-12-11T10:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.537543 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.544701 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qm4mr"] Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.545118 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:01 crc kubenswrapper[4953]: E1211 10:12:01.545173 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.558260 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.570347 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.582536 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.595001 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.608019 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.619440 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.627889 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqpb5\" (UniqueName: \"kubernetes.io/projected/86f65b63-32e0-49cc-bc96-272ecfb987ed-kube-api-access-hqpb5\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.627947 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.632109 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.632149 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.632162 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.632183 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.632197 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:01Z","lastTransitionTime":"2025-12-11T10:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.639307 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.652634 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 
10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.729315 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqpb5\" (UniqueName: \"kubernetes.io/projected/86f65b63-32e0-49cc-bc96-272ecfb987ed-kube-api-access-hqpb5\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.729372 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:01 crc kubenswrapper[4953]: E1211 10:12:01.729534 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:01 crc kubenswrapper[4953]: E1211 10:12:01.729615 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs podName:86f65b63-32e0-49cc-bc96-272ecfb987ed nodeName:}" failed. No retries permitted until 2025-12-11 10:12:02.229597211 +0000 UTC m=+40.253456244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs") pod "network-metrics-daemon-qm4mr" (UID: "86f65b63-32e0-49cc-bc96-272ecfb987ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.734767 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.734815 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.734827 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.734857 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.734870 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:01Z","lastTransitionTime":"2025-12-11T10:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.748006 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.768488 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqpb5\" (UniqueName: 
\"kubernetes.io/projected/86f65b63-32e0-49cc-bc96-272ecfb987ed-kube-api-access-hqpb5\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:01 crc kubenswrapper[4953]: I1211 10:12:01.774441 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:01Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.006016 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.014780 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.014818 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.014826 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.014840 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.014850 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:02Z","lastTransitionTime":"2025-12-11T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.030627 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.043188 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.057673 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.067258 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.079926 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.090365 4953 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is 
after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.107429 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"m
ountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.133176 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.133225 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.133388 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.133402 4953 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.133419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.133431 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:02Z","lastTransitionTime":"2025-12-11T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.142471 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.156458 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.165060 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.176308 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.187909 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.196798 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.236024 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.236065 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.236074 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.236087 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.236097 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:02Z","lastTransitionTime":"2025-12-11T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.300213 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:02 crc kubenswrapper[4953]: E1211 10:12:02.300396 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:02 crc kubenswrapper[4953]: E1211 10:12:02.300474 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs podName:86f65b63-32e0-49cc-bc96-272ecfb987ed nodeName:}" failed. No retries permitted until 2025-12-11 10:12:03.300455301 +0000 UTC m=+41.324314344 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs") pod "network-metrics-daemon-qm4mr" (UID: "86f65b63-32e0-49cc-bc96-272ecfb987ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.345261 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.345310 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.345320 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.345333 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.345343 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:02Z","lastTransitionTime":"2025-12-11T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.448427 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.448457 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.448466 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.448480 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.448488 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:02Z","lastTransitionTime":"2025-12-11T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.489840 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.511398 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.527523 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.552058 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.552108 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.552125 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.552146 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.552162 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:02Z","lastTransitionTime":"2025-12-11T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.552438 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.569969 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.587680 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.601489 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.620319 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.634075 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 
10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.649106 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.654289 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.654336 4953 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.654350 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.654369 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.654381 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:02Z","lastTransitionTime":"2025-12-11T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.668733 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.689274 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.703499 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.719755 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.735143 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.744539 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:02Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.756163 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.756224 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.756238 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.756254 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.756265 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:02Z","lastTransitionTime":"2025-12-11T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.859783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.859874 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.859924 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.860083 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.860118 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:02Z","lastTransitionTime":"2025-12-11T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.963667 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.963758 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.963780 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.963804 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:02 crc kubenswrapper[4953]: I1211 10:12:02.963823 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:02Z","lastTransitionTime":"2025-12-11T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.066993 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.067058 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.067071 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.067089 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.067102 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:03Z","lastTransitionTime":"2025-12-11T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.169962 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.170021 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.170040 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.170063 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.170081 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:03Z","lastTransitionTime":"2025-12-11T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.262484 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/0.log" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.266610 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc" exitCode=1 Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.266670 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc"} Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.267879 4953 scope.go:117] "RemoveContainer" containerID="dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.276116 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.276180 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.276199 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.276225 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.276246 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:03Z","lastTransitionTime":"2025-12-11T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.294475 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.310882 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:03 crc kubenswrapper[4953]: E1211 10:12:03.311162 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:03 crc kubenswrapper[4953]: E1211 10:12:03.311234 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs podName:86f65b63-32e0-49cc-bc96-272ecfb987ed nodeName:}" 
failed. No retries permitted until 2025-12-11 10:12:05.311215288 +0000 UTC m=+43.335074331 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs") pod "network-metrics-daemon-qm4mr" (UID: "86f65b63-32e0-49cc-bc96-272ecfb987ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.319164 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.334851 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.358137 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.372865 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.379343 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.379384 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.379397 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.379413 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.379425 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:03Z","lastTransitionTime":"2025-12-11T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.393751 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:02Z\\\",\\\"message\\\":\\\"11 10:12:01.786503 6156 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:01.786662 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:01.786726 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:01.786790 6156 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:01.786831 6156 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:01.786873 6156 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 10:12:01.786915 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:01.797637 6156 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:01.797695 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1211 10:12:01.797728 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 10:12:01.797813 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 10:12:01.797837 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 10:12:01.797766 6156 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:01.797787 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 10:12:01.797857 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 10:12:01.997915 6156 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.406124 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.420552 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.434514 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.447279 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.457220 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.468372 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.475764 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:03 crc kubenswrapper[4953]: E1211 10:12:03.475860 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.476145 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:03 crc kubenswrapper[4953]: E1211 10:12:03.476192 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.476226 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:03 crc kubenswrapper[4953]: E1211 10:12:03.476265 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.476302 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:03 crc kubenswrapper[4953]: E1211 10:12:03.476345 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.481608 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.481643 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.481656 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.481672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.481684 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:03Z","lastTransitionTime":"2025-12-11T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.485436 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.498828 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.511759 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.525600 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:03Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.583423 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.583449 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.583458 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.583469 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.583510 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:03Z","lastTransitionTime":"2025-12-11T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.685456 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.685489 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.685497 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.685510 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.685519 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:03Z","lastTransitionTime":"2025-12-11T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.787599 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.787776 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.787795 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.787810 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.787820 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:03Z","lastTransitionTime":"2025-12-11T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.890045 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.890087 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.890095 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.890110 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.890121 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:03Z","lastTransitionTime":"2025-12-11T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.992351 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.992393 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.992407 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.992424 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:03 crc kubenswrapper[4953]: I1211 10:12:03.992435 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:03Z","lastTransitionTime":"2025-12-11T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.094627 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.094780 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.094854 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.094949 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.095030 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:04Z","lastTransitionTime":"2025-12-11T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.197696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.197752 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.197761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.197775 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.197785 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:04Z","lastTransitionTime":"2025-12-11T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.275843 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/0.log" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.278659 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d"} Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.279693 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.300431 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.300884 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.301049 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.301116 4953 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.301195 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.301280 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:04Z","lastTransitionTime":"2025-12-11T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.313007 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\
\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.338309 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9438288b4f1630934ea4ec24d43b8c123a9bb53
6442289988101e87cc72425d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:02Z\\\",\\\"message\\\":\\\"11 10:12:01.786503 6156 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:01.786662 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:01.786726 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:01.786790 6156 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:01.786831 6156 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:01.786873 6156 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 10:12:01.786915 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:01.797637 6156 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:01.797695 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1211 10:12:01.797728 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 10:12:01.797813 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 10:12:01.797837 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 10:12:01.797766 6156 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:01.797787 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 10:12:01.797857 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 10:12:01.997915 6156 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.351971 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 
10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.366290 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.378839 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.390214 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.399925 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.403488 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.403526 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.403539 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.403554 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.403563 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:04Z","lastTransitionTime":"2025-12-11T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.410601 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.427449 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.438784 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.450831 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.465454 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.478922 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.489809 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.504065 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:04Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.505621 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.505723 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.505787 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.505857 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.505918 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:04Z","lastTransitionTime":"2025-12-11T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.607896 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.607975 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.608000 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.608030 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.608053 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:04Z","lastTransitionTime":"2025-12-11T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.710877 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.710924 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.710937 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.710955 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.710967 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:04Z","lastTransitionTime":"2025-12-11T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.814151 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.814209 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.814224 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.814241 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.814253 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:04Z","lastTransitionTime":"2025-12-11T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.918935 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.919005 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.919015 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.919031 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:04 crc kubenswrapper[4953]: I1211 10:12:04.919043 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:04Z","lastTransitionTime":"2025-12-11T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.021255 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.021302 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.021311 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.021323 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.021332 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:05Z","lastTransitionTime":"2025-12-11T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.124136 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.124166 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.124175 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.124187 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.124198 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:05Z","lastTransitionTime":"2025-12-11T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.227414 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.227445 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.227454 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.227468 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.227480 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:05Z","lastTransitionTime":"2025-12-11T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.284377 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/1.log" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.285416 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/0.log" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.288613 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d" exitCode=1 Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.288700 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d"} Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.288859 4953 scope.go:117] "RemoveContainer" containerID="dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.289350 4953 scope.go:117] "RemoveContainer" containerID="f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d" Dec 11 10:12:05 crc kubenswrapper[4953]: E1211 10:12:05.289549 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.307469 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.324276 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.329232 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:05 crc kubenswrapper[4953]: E1211 10:12:05.329366 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:05 crc kubenswrapper[4953]: E1211 10:12:05.329436 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs podName:86f65b63-32e0-49cc-bc96-272ecfb987ed nodeName:}" failed. No retries permitted until 2025-12-11 10:12:09.329418358 +0000 UTC m=+47.353277391 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs") pod "network-metrics-daemon-qm4mr" (UID: "86f65b63-32e0-49cc-bc96-272ecfb987ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.330922 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.331045 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.331142 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.331268 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.331362 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:05Z","lastTransitionTime":"2025-12-11T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.343839 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.357091 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.376432 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.392651 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.409988 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.431293 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.434095 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.434119 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.434126 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.434140 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.434150 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:05Z","lastTransitionTime":"2025-12-11T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.444805 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.458896 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.472229 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.472349 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.472305 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:05 crc kubenswrapper[4953]: E1211 10:12:05.472517 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:05 crc kubenswrapper[4953]: E1211 10:12:05.472637 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.472695 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.472739 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:05 crc kubenswrapper[4953]: E1211 10:12:05.472825 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:05 crc kubenswrapper[4953]: E1211 10:12:05.472942 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.490452 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.503078 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.524610 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd7fc5afd00221721e6c142aa8ec3ec7cd5d5e2eb757952f34c702fa0aa2f9fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:02Z\\\",\\\"message\\\":\\\"11 10:12:01.786503 6156 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:01.786662 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:01.786726 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:01.786790 6156 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:01.786831 6156 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:01.786873 6156 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 10:12:01.786915 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:01.797637 6156 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:01.797695 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1211 10:12:01.797728 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 10:12:01.797813 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 10:12:01.797837 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 10:12:01.797766 6156 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:01.797787 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 10:12:01.797857 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 10:12:01.997915 6156 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:04Z\\\",\\\"message\\\":\\\"ller-manager_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 10:12:04.078006 6414 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 10:12:04.077924 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"
cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.536939 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.536991 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.537008 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.537028 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 
10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.537042 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:05Z","lastTransitionTime":"2025-12-11T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.541226 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.557722 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:05Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.639366 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.639408 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.639417 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.639432 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.639444 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:05Z","lastTransitionTime":"2025-12-11T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.741984 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.742040 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.742055 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.742075 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.742090 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:05Z","lastTransitionTime":"2025-12-11T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.845097 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.845511 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.845752 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.845932 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.846130 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:05Z","lastTransitionTime":"2025-12-11T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.949973 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.950359 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.950637 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.951048 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:05 crc kubenswrapper[4953]: I1211 10:12:05.951373 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:05Z","lastTransitionTime":"2025-12-11T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.054510 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.054552 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.054563 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.054593 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.054606 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:06Z","lastTransitionTime":"2025-12-11T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.157348 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.157413 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.157431 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.157457 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.157475 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:06Z","lastTransitionTime":"2025-12-11T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.260461 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.260523 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.260548 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.260610 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.260637 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:06Z","lastTransitionTime":"2025-12-11T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.298526 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/1.log" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.302929 4953 scope.go:117] "RemoveContainer" containerID="f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d" Dec 11 10:12:06 crc kubenswrapper[4953]: E1211 10:12:06.303167 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.324940 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\
"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.340394 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.355330 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.363148 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.363209 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.363222 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.363241 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.363255 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:06Z","lastTransitionTime":"2025-12-11T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.371621 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.383615 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.398478 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.412955 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.434678 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:04Z\\\",\\\"message\\\":\\\"ller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 10:12:04.078006 6414 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 10:12:04.077924 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.447870 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.459899 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.468352 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.468454 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.468474 4953 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.468496 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.468506 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:06Z","lastTransitionTime":"2025-12-11T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.475299 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.487277 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.498426 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.510867 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.523675 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.539705 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:06Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.571304 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.571369 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:06 crc 
kubenswrapper[4953]: I1211 10:12:06.571382 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.571400 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.571758 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:06Z","lastTransitionTime":"2025-12-11T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.675009 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.675306 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.675395 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.675473 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.675538 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:06Z","lastTransitionTime":"2025-12-11T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.778616 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.778660 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.778669 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.778683 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.778692 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:06Z","lastTransitionTime":"2025-12-11T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.882065 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.882352 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.882452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.882529 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.882625 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:06Z","lastTransitionTime":"2025-12-11T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.986028 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.986072 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.986081 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.986095 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:06 crc kubenswrapper[4953]: I1211 10:12:06.986105 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:06Z","lastTransitionTime":"2025-12-11T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.089389 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.089469 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.089496 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.089527 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.089554 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:07Z","lastTransitionTime":"2025-12-11T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.191859 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.191936 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.191949 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.191966 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.191977 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:07Z","lastTransitionTime":"2025-12-11T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.294940 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.295019 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.295073 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.295104 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.295126 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:07Z","lastTransitionTime":"2025-12-11T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.399485 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.399567 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.399646 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.399677 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.399713 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:07Z","lastTransitionTime":"2025-12-11T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.472482 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.472526 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.472669 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.472674 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:07 crc kubenswrapper[4953]: E1211 10:12:07.472743 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:07 crc kubenswrapper[4953]: E1211 10:12:07.472938 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:07 crc kubenswrapper[4953]: E1211 10:12:07.472988 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:07 crc kubenswrapper[4953]: E1211 10:12:07.473100 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.502000 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.502034 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.502043 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.502056 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.502064 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:07Z","lastTransitionTime":"2025-12-11T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.604505 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.604776 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.604895 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.604970 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.605047 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:07Z","lastTransitionTime":"2025-12-11T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.707537 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.707840 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.707946 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.708050 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.708134 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:07Z","lastTransitionTime":"2025-12-11T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.810568 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.810639 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.810664 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.810677 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.810686 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:07Z","lastTransitionTime":"2025-12-11T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.912761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.913116 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.913205 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.913309 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:07 crc kubenswrapper[4953]: I1211 10:12:07.913406 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:07Z","lastTransitionTime":"2025-12-11T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.015901 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.015961 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.015976 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.015993 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.016006 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:08Z","lastTransitionTime":"2025-12-11T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.118237 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.118273 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.118284 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.118298 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.118310 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:08Z","lastTransitionTime":"2025-12-11T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.221376 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.221416 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.221424 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.221441 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.221457 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:08Z","lastTransitionTime":"2025-12-11T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.324829 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.324877 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.324892 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.324911 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.324922 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:08Z","lastTransitionTime":"2025-12-11T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.427747 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.427791 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.427802 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.427817 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.427830 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:08Z","lastTransitionTime":"2025-12-11T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.530122 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.530189 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.530208 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.530233 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.530251 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:08Z","lastTransitionTime":"2025-12-11T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.633357 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.633729 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.633836 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.633930 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.634002 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:08Z","lastTransitionTime":"2025-12-11T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.736531 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.736824 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.736908 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.736992 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.737080 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:08Z","lastTransitionTime":"2025-12-11T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.839610 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.839646 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.839655 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.839670 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.839679 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:08Z","lastTransitionTime":"2025-12-11T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.942353 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.942651 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.942741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.942828 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:08 crc kubenswrapper[4953]: I1211 10:12:08.942892 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:08Z","lastTransitionTime":"2025-12-11T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.045944 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.045976 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.045985 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.046016 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.046027 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:09Z","lastTransitionTime":"2025-12-11T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.147993 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.148035 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.148050 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.148066 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.148076 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:09Z","lastTransitionTime":"2025-12-11T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.250819 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.250854 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.250865 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.250880 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.250891 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:09Z","lastTransitionTime":"2025-12-11T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.354238 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.354607 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.354616 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.354629 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.354638 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:09Z","lastTransitionTime":"2025-12-11T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.367906 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:09 crc kubenswrapper[4953]: E1211 10:12:09.368195 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:09 crc kubenswrapper[4953]: E1211 10:12:09.368244 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs podName:86f65b63-32e0-49cc-bc96-272ecfb987ed nodeName:}" failed. No retries permitted until 2025-12-11 10:12:17.368230332 +0000 UTC m=+55.392089365 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs") pod "network-metrics-daemon-qm4mr" (UID: "86f65b63-32e0-49cc-bc96-272ecfb987ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.456779 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.456856 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.456879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.456907 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.456931 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:09Z","lastTransitionTime":"2025-12-11T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.472499 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.472558 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:09 crc kubenswrapper[4953]: E1211 10:12:09.472624 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.472656 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:09 crc kubenswrapper[4953]: E1211 10:12:09.472769 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.472823 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 10:12:09 crc kubenswrapper[4953]: E1211 10:12:09.472933 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed"
Dec 11 10:12:09 crc kubenswrapper[4953]: E1211 10:12:09.473046 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.559711 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.559757 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.559783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.559815 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.559839 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:09Z","lastTransitionTime":"2025-12-11T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.663168 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.663239 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.663368 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.663403 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.663431 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:09Z","lastTransitionTime":"2025-12-11T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.766282 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.766565 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.766727 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.766845 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.766938 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:09Z","lastTransitionTime":"2025-12-11T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.869891 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.870234 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.870363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.870471 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.870551 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:09Z","lastTransitionTime":"2025-12-11T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.973779 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.973869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.973894 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.973928 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:09 crc kubenswrapper[4953]: I1211 10:12:09.973962 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:09Z","lastTransitionTime":"2025-12-11T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.074036 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.074125 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.074143 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.074167 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.074185 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:10 crc kubenswrapper[4953]: E1211 10:12:10.097236 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:10Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.102350 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.102420 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.102432 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.102449 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.102461 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:10 crc kubenswrapper[4953]: E1211 10:12:10.123716 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:10Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.128775 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.128812 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.128823 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.128839 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.128868 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:10 crc kubenswrapper[4953]: E1211 10:12:10.144881 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:10Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.149163 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.149204 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.149217 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.149235 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.149248 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:10 crc kubenswrapper[4953]: E1211 10:12:10.162591 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:10Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.166760 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.166800 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.166815 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.166833 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.166845 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:10 crc kubenswrapper[4953]: E1211 10:12:10.182990 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:10Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:10 crc kubenswrapper[4953]: E1211 10:12:10.183124 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.184719 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
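
The repeated "Unable to update node status" failures above all trace back to a single TLS error: the network-node-identity webhook on 127.0.0.1:9743 presents a serving certificate whose NotAfter is 2025-08-24T17:21:41Z, while the node clock reads 2025-12-11, so every status patch is rejected before it reaches the API server. As a minimal sketch, the following Go program performs the same validity comparison the TLS handshake does; the certificate path is a hypothetical placeholder, not taken from this log.

```go
// Sketch: check a PEM certificate's validity window the way the x509
// verifier in the log does ("certificate has expired or is not yet valid").
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path to the serving cert presented on 127.0.0.1:9743.
	data, err := os.ReadFile("/etc/kubernetes/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	// A cert is valid iff NotBefore <= now <= NotAfter; the handshake
	// failure above is the now > NotAfter branch.
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("not yet valid")
	default:
		fmt.Println("currently valid")
	}
}
```

Renewing that certificate (on a CRC cluster, typically by letting the built-in certificate rotation catch up after a long-stopped VM is restarted) is what clears this class of failure; the log resumes below mid-record.
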
event="NodeHasSufficientMemory" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.184758 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.184767 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.184781 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.184792 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.286609 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.286887 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.286979 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.287050 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.287119 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.390156 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.390216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.390234 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.390258 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.390275 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.492557 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.492608 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.492617 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.492632 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.492645 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.595528 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.595645 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.595675 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.595701 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.595719 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.698005 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.698044 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.698082 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.698099 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.698111 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.801046 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.801114 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.801131 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.801156 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.801175 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.903451 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.903490 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.903500 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.903516 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:10 crc kubenswrapper[4953]: I1211 10:12:10.903527 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:10Z","lastTransitionTime":"2025-12-11T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.006631 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.006674 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.006685 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.006700 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.006712 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:11Z","lastTransitionTime":"2025-12-11T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.110375 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.110443 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.110452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.110472 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.110483 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:11Z","lastTransitionTime":"2025-12-11T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.212846 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.212905 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.212921 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.212941 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.212956 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:11Z","lastTransitionTime":"2025-12-11T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.317073 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.317148 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.317167 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.317199 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.317213 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:11Z","lastTransitionTime":"2025-12-11T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.420005 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.420052 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.420067 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.420083 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.420095 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:11Z","lastTransitionTime":"2025-12-11T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.473040 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.473089 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.473228 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:11 crc kubenswrapper[4953]: E1211 10:12:11.473343 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.473414 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:11 crc kubenswrapper[4953]: E1211 10:12:11.473453 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:11 crc kubenswrapper[4953]: E1211 10:12:11.473684 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
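
Every "Node became not ready" record in this stream carries the same root message: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, so sandboxes for the pods named above cannot be created. A minimal sketch of that readiness probe, assuming the runtime simply scans its conf dir for .conf/.conflist/.json files (roughly what libcni's conf-loading helpers do — an assumption, not a claim about CRI-O's exact code path):

```go
// Sketch: reproduce the "no CNI configuration file in <dir>" check.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		// CNI configs are flat files with one of these extensions.
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Println("CNI configs present:", confs)
}
```

On this node the directory stays empty until the network operator's pods come up, which is why the same condition is re-recorded every ~100 ms below.
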
pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:11 crc kubenswrapper[4953]: E1211 10:12:11.473875 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.523783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.523858 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.523880 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.523916 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.523940 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:11Z","lastTransitionTime":"2025-12-11T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.626248 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.626297 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.626310 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.626326 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.626337 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:11Z","lastTransitionTime":"2025-12-11T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.729607 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.729697 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.729718 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.729746 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.729766 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:11Z","lastTransitionTime":"2025-12-11T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.832807 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.832869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.832879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.832900 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.832912 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:11Z","lastTransitionTime":"2025-12-11T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.935946 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.936013 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.936030 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.936055 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:11 crc kubenswrapper[4953]: I1211 10:12:11.936074 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:11Z","lastTransitionTime":"2025-12-11T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.039639 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.039708 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.039717 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.039738 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.039749 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:12Z","lastTransitionTime":"2025-12-11T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.143970 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.144012 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.144025 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.144056 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.144068 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:12Z","lastTransitionTime":"2025-12-11T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.249932 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.249990 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.250003 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.250032 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.250043 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:12Z","lastTransitionTime":"2025-12-11T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.353739 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.353780 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.353788 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.353802 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.353812 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:12Z","lastTransitionTime":"2025-12-11T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.456296 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.456342 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.456353 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.456367 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.456377 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:12Z","lastTransitionTime":"2025-12-11T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.491304 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.503536 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.523851 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.538356 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.549909 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.559073 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.559113 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.559125 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.559141 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.559154 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:12Z","lastTransitionTime":"2025-12-11T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.562463 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.575467 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.587762 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.603988 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.620847 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.634321 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.647353 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.661786 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.661830 4953 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.661844 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.661865 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.661882 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:12Z","lastTransitionTime":"2025-12-11T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.662502 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.680102 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9438288b4f1630934ea4ec24d43b8c123a9bb53
6442289988101e87cc72425d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:04Z\\\",\\\"message\\\":\\\"ller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 10:12:04.078006 6414 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 10:12:04.077924 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.690214 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.702751 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:12Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.764735 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.765054 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.765121 4953 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.765194 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.765270 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:12Z","lastTransitionTime":"2025-12-11T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.867904 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.867945 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.867957 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.867973 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.867987 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:12Z","lastTransitionTime":"2025-12-11T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.970693 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.970729 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.970741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.970757 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:12 crc kubenswrapper[4953]: I1211 10:12:12.970767 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:12Z","lastTransitionTime":"2025-12-11T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.073974 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.074017 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.074026 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.074043 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.074053 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:13Z","lastTransitionTime":"2025-12-11T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.177778 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.177848 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.177872 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.177903 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.177927 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:13Z","lastTransitionTime":"2025-12-11T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.280726 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.280793 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.280815 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.280838 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.280859 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:13Z","lastTransitionTime":"2025-12-11T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.383528 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.383599 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.383613 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.383631 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.383644 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:13Z","lastTransitionTime":"2025-12-11T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.472417 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.472500 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.472551 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.472591 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:13 crc kubenswrapper[4953]: E1211 10:12:13.472669 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:13 crc kubenswrapper[4953]: E1211 10:12:13.472852 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:13 crc kubenswrapper[4953]: E1211 10:12:13.472957 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:13 crc kubenswrapper[4953]: E1211 10:12:13.473136 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.487154 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.487216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.487274 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.487306 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.487329 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:13Z","lastTransitionTime":"2025-12-11T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.530693 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.551743 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.557342 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.575441 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.589337 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.589829 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.589854 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.589863 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.589879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.589889 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:13Z","lastTransitionTime":"2025-12-11T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.605904 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214
a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.617851 4953 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.634603 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.647987 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.663117 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.680494 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.691857 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.691911 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.691928 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.691952 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.691969 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:13Z","lastTransitionTime":"2025-12-11T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.694545 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.706768 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.720239 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.738112 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9438288b4f1630934ea4ec24d43b8c123a9bb53
6442289988101e87cc72425d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:04Z\\\",\\\"message\\\":\\\"ller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 10:12:04.078006 6414 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 10:12:04.077924 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.749140 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.769630 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.787596 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:13Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.794527 4953 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.794598 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.794611 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.794625 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.794635 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:13Z","lastTransitionTime":"2025-12-11T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.896951 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.896989 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.897001 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.897017 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.897029 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:13Z","lastTransitionTime":"2025-12-11T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.999531 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.999604 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.999617 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:13 crc kubenswrapper[4953]: I1211 10:12:13.999636 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:13.999647 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:13Z","lastTransitionTime":"2025-12-11T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.102814 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.102857 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.102870 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.102890 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.102904 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:14Z","lastTransitionTime":"2025-12-11T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.205089 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.205149 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.205162 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.205179 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.205190 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:14Z","lastTransitionTime":"2025-12-11T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.308031 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.308107 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.308118 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.308137 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.308167 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:14Z","lastTransitionTime":"2025-12-11T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.410839 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.410881 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.410890 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.410905 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.410914 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:14Z","lastTransitionTime":"2025-12-11T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.513626 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.513664 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.513673 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.513688 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.513697 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:14Z","lastTransitionTime":"2025-12-11T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.616381 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.616440 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.616459 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.616478 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.616490 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:14Z","lastTransitionTime":"2025-12-11T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.719373 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.719429 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.719441 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.719457 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.719469 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:14Z","lastTransitionTime":"2025-12-11T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.822010 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.822106 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.822128 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.822154 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.822172 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:14Z","lastTransitionTime":"2025-12-11T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.925879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.925916 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.925925 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.925939 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:14 crc kubenswrapper[4953]: I1211 10:12:14.925948 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:14Z","lastTransitionTime":"2025-12-11T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.030270 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.030383 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.030403 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.030428 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.030452 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:15Z","lastTransitionTime":"2025-12-11T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.133487 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.133538 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.133557 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.133607 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.133628 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:15Z","lastTransitionTime":"2025-12-11T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.233778 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.233902 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.233952 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:12:47.23393979 +0000 UTC m=+85.257798823 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.235542 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.235590 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.235602 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.235614 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.235624 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:15Z","lastTransitionTime":"2025-12-11T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.338710 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.338767 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.338781 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.338798 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.338813 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:15Z","lastTransitionTime":"2025-12-11T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.442125 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.442181 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.442198 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.442220 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.442237 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:15Z","lastTransitionTime":"2025-12-11T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.473065 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.473114 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.473084 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.473068 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.473234 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.473308 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.473420 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.473557 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.544992 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.545048 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.545060 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.545077 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.545090 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:15Z","lastTransitionTime":"2025-12-11T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.637758 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.637952 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.638016 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.638061 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 
10:12:15.638205 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.638275 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:12:47.638254384 +0000 UTC m=+85.662113437 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.638546 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:12:47.638534264 +0000 UTC m=+85.662393307 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.638690 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.638719 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.638744 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.638796 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 10:12:47.638776312 +0000 UTC m=+85.662635365 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.638959 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.639023 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.639042 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:12:15 crc kubenswrapper[4953]: E1211 10:12:15.639127 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 10:12:47.639104314 +0000 UTC m=+85.662963357 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.647970 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.648055 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.648088 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.648121 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.648145 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:15Z","lastTransitionTime":"2025-12-11T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.751504 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.751544 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.751554 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.751629 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.751640 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:15Z","lastTransitionTime":"2025-12-11T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.853483 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.853518 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.853528 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.853543 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.853554 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:15Z","lastTransitionTime":"2025-12-11T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.956752 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.956798 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.956810 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.956827 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:15 crc kubenswrapper[4953]: I1211 10:12:15.956842 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:15Z","lastTransitionTime":"2025-12-11T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.059389 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.059451 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.059478 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.059502 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.059516 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:16Z","lastTransitionTime":"2025-12-11T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.162112 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.162140 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.162149 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.162160 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.162168 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:16Z","lastTransitionTime":"2025-12-11T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.265001 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.265044 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.265056 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.265072 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.265085 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:16Z","lastTransitionTime":"2025-12-11T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.367159 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.367455 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.367630 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.367746 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.367836 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:16Z","lastTransitionTime":"2025-12-11T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.469985 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.470025 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.470038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.470055 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.470067 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:16Z","lastTransitionTime":"2025-12-11T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.573470 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.573514 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.573525 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.573542 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.573557 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:16Z","lastTransitionTime":"2025-12-11T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.676054 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.676148 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.676172 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.676205 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.676233 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:16Z","lastTransitionTime":"2025-12-11T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.779110 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.779152 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.779168 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.779189 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.779201 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:16Z","lastTransitionTime":"2025-12-11T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.881266 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.881317 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.881328 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.881343 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:16 crc kubenswrapper[4953]: I1211 10:12:16.881355 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:16Z","lastTransitionTime":"2025-12-11T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.257268 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.257324 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.257341 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.257361 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.257377 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:17Z","lastTransitionTime":"2025-12-11T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.360829 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.360929 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.360949 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.360969 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.360983 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:17Z","lastTransitionTime":"2025-12-11T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.460054 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:17 crc kubenswrapper[4953]: E1211 10:12:17.460259 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:17 crc kubenswrapper[4953]: E1211 10:12:17.460329 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs podName:86f65b63-32e0-49cc-bc96-272ecfb987ed nodeName:}" failed. No retries permitted until 2025-12-11 10:12:33.460308983 +0000 UTC m=+71.484168026 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs") pod "network-metrics-daemon-qm4mr" (UID: "86f65b63-32e0-49cc-bc96-272ecfb987ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.463712 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.463887 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.463920 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.463951 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.463974 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:17Z","lastTransitionTime":"2025-12-11T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.472495 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.472499 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.472501 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.472615 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:17 crc kubenswrapper[4953]: E1211 10:12:17.472740 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:17 crc kubenswrapper[4953]: E1211 10:12:17.472960 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:17 crc kubenswrapper[4953]: E1211 10:12:17.473396 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:17 crc kubenswrapper[4953]: E1211 10:12:17.473552 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.473813 4953 scope.go:117] "RemoveContainer" containerID="f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.566565 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.566610 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.566620 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.566639 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.566658 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:17Z","lastTransitionTime":"2025-12-11T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.668879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.668941 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.668963 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.668988 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.669006 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:17Z","lastTransitionTime":"2025-12-11T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.771914 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.771956 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.771966 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.771978 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.771987 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:17Z","lastTransitionTime":"2025-12-11T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.874220 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.874280 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.874296 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.874317 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.874330 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:17Z","lastTransitionTime":"2025-12-11T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.976521 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.976565 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.976594 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.976613 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:17 crc kubenswrapper[4953]: I1211 10:12:17.976623 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:17Z","lastTransitionTime":"2025-12-11T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.078643 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.078693 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.078713 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.078735 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.078749 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:18Z","lastTransitionTime":"2025-12-11T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.181190 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.181218 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.181225 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.181238 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.181439 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:18Z","lastTransitionTime":"2025-12-11T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.283781 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.283811 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.283818 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.283834 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.283843 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:18Z","lastTransitionTime":"2025-12-11T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.342973 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/1.log" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.346746 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77"} Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.347784 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.387946 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.388052 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.388087 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.388095 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.388120 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.388131 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:18Z","lastTransitionTime":"2025-12-11T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.409487 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.427775 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.439339 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.454803 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.467289 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.478926 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.490372 4953 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.490419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.490431 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.490445 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.490455 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:18Z","lastTransitionTime":"2025-12-11T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.502309 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141
172896eb9bc48b6fab7e7b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:04Z\\\",\\\"message\\\":\\\"ller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 10:12:04.078006 6414 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 10:12:04.077924 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.521101 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 
10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.533406 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6787fb31-272a-4dd9-b0f2-bfb5630d6901\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42af1d5ca92f02433468753b3f0f0cb74ef360928733d71e4316fb8ed77aea63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ec7f5911594475d4a03216b385df264254e50cb55ef7eee3d2ac0a88e8ef1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e43d812b41951ea02ea6aeaf53d101e762a3bc0513865818ff2dcc6506a24d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.548326 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.561847 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.572445 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.581742 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.591259 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.592789 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.592824 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.592853 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.592867 4953 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.592876 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:18Z","lastTransitionTime":"2025-12-11T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.603721 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bi
nary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.612693 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T10:12:18Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.694637 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.694694 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.694711 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.694733 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.694751 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:18Z","lastTransitionTime":"2025-12-11T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.799000 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.799036 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.799048 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.799064 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.799076 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:18Z","lastTransitionTime":"2025-12-11T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.902385 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.902437 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.902454 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.902476 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:18 crc kubenswrapper[4953]: I1211 10:12:18.902493 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:18Z","lastTransitionTime":"2025-12-11T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.006223 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.006290 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.006305 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.006325 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.006336 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:19Z","lastTransitionTime":"2025-12-11T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.109191 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.109247 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.109260 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.109277 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.109291 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:19Z","lastTransitionTime":"2025-12-11T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.211204 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.211324 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.211340 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.211354 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.211362 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:19Z","lastTransitionTime":"2025-12-11T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.314056 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.314107 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.314117 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.314131 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.314141 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:19Z","lastTransitionTime":"2025-12-11T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.352669 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/2.log" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.353401 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/1.log" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.356526 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77" exitCode=1 Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.356590 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77"} Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.356620 4953 scope.go:117] "RemoveContainer" containerID="f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.357286 4953 scope.go:117] "RemoveContainer" containerID="60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77" Dec 11 10:12:19 crc kubenswrapper[4953]: E1211 10:12:19.357467 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.373966 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.390632 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.405131 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.416027 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.416908 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.416956 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.416972 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.416990 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.417002 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:19Z","lastTransitionTime":"2025-12-11T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.426020 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.440565 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bin
ary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.454001 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kuberne
tes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.466637 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.473009 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.473012 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:19 crc kubenswrapper[4953]: E1211 10:12:19.473244 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.473047 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:19 crc kubenswrapper[4953]: E1211 10:12:19.473346 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.473024 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:19 crc kubenswrapper[4953]: E1211 10:12:19.473155 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:19 crc kubenswrapper[4953]: E1211 10:12:19.473663 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.482282 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.493877 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.505430 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.519377 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.519419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.519430 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.519448 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.519462 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:19Z","lastTransitionTime":"2025-12-11T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.519614 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.530057 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.546900 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9438288b4f1630934ea4ec24d43b8c123a9bb536442289988101e87cc72425d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:04Z\\\",\\\"message\\\":\\\"ller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 10:12:04.078006 6414 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 10:12:04.077924 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:18Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.510892 6563 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.511234 6563 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 10:12:18.511255 6563 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:18.511261 6563 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:18.511504 6563 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 10:12:18.511509 6563 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:18.511523 6563 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:18.511549 6563 factory.go:656] Stopping watch factory\\\\nI1211 10:12:18.511589 6563 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:18.511599 6563 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 10:12:18.511606 6563 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:18.511613 6563 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:18.511620 6563 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:18.511628 6563 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.563743 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.576591 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6787fb31-272a-4dd9-b0f2-bfb5630d6901\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42af1d5ca92f02433468753b3f0f0cb74ef360928733d71e4316fb8ed77aea63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ec7f5911594475d4a03216b385df264254e50cb55ef7eee3d2ac0a88e8ef1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e43d812b41951ea02ea6aeaf53d101e762a3bc0513865818ff2dcc6506a24d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.589287 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:19Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.622331 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.622370 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.622382 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.622400 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.622412 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:19Z","lastTransitionTime":"2025-12-11T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.725150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.725234 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.725259 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.725291 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.725316 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:19Z","lastTransitionTime":"2025-12-11T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.828966 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.829024 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.829041 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.829212 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.829230 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:19Z","lastTransitionTime":"2025-12-11T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.931354 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.931412 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.931430 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.931450 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:19 crc kubenswrapper[4953]: I1211 10:12:19.931465 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:19Z","lastTransitionTime":"2025-12-11T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.034298 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.034393 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.034414 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.034434 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.034448 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.137037 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.137088 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.137100 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.137130 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.137142 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.239136 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.239188 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.239199 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.239219 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.239234 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.342135 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.342194 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.342219 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.342249 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.342265 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.361312 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/2.log" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.364773 4953 scope.go:117] "RemoveContainer" containerID="60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77" Dec 11 10:12:20 crc kubenswrapper[4953]: E1211 10:12:20.364927 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.379894 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.394199 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.414294 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:18Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.510892 6563 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.511234 6563 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 10:12:18.511255 6563 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:18.511261 6563 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:18.511504 6563 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 10:12:18.511509 6563 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:18.511523 6563 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:18.511549 6563 factory.go:656] Stopping watch factory\\\\nI1211 10:12:18.511589 6563 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:18.511599 6563 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 10:12:18.511606 6563 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:18.511613 6563 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:18.511620 6563 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:18.511628 6563 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.426463 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.440075 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6787fb31-272a-4dd9-b0f2-bfb5630d6901\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42af1d5ca92f02433468753b3f0f0cb74ef360928733d71e4316fb8ed77aea63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ec7f5911594475d4a03216b385df264254e50cb55ef7eee3d2ac0a88e8ef1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e43d812b41951ea02ea6aeaf53d101e762a3bc0513865818ff2dcc6506a24d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.444326 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.444377 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.444392 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.444409 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.444424 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.453096 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.467964 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.482004 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.493076 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.503487 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.516472 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.525961 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.539708 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.545508 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.545603 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.545622 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.545645 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.545661 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.607013 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: E1211 10:12:20.611207 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.626244 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.626281 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.626291 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.626304 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.626313 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.626285 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.640516 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: E1211 10:12:20.641163 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.645034 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.645113 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.645124 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.645140 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.645157 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.658062 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: E1211 10:12:20.661014 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.664496 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.664550 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.664559 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.664586 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.664596 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: E1211 10:12:20.675560 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.678757 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.678797 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.678806 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.678822 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.678831 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: E1211 10:12:20.689827 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:20Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:20 crc kubenswrapper[4953]: E1211 10:12:20.689970 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.691633 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.691676 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.691685 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.691703 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.691714 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.793985 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.794028 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.794038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.794053 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.794066 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.896775 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.896822 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.896847 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.896868 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:20 crc kubenswrapper[4953]: I1211 10:12:20.896884 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:20Z","lastTransitionTime":"2025-12-11T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.000152 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.000203 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.000216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.000234 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.000247 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:21Z","lastTransitionTime":"2025-12-11T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.103282 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.103348 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.103358 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.103371 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.103381 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:21Z","lastTransitionTime":"2025-12-11T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.205499 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.205542 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.205550 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.205563 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.205592 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:21Z","lastTransitionTime":"2025-12-11T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
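The NodeNotReady churn repeats once per status sync because the kubelet mirrors the CRI runtime's NetworkReady condition, and the runtime keeps reporting it false until some network plugin writes a configuration file into /etc/kubernetes/cni/net.d/. A short sketch, assuming it is run directly on the node, of the same existence check the message is complaining about:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The directory named in the NetworkReady message above.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := false
	for _, e := range entries {
		// CNI configs are .conf/.conflist (historically also .json) files.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file; the network plugin has not written one yet")
	}
}

Here the directory presumably stays empty because the network provider's own pods are among those blocked by the webhook failure above, so the condition cannot clear on its own.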
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.308478 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.308535 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.308550 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.308596 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.308615 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:21Z","lastTransitionTime":"2025-12-11T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.410773 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.410835 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.410847 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.410885 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.410898 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:21Z","lastTransitionTime":"2025-12-11T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.473178 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.473230 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.473211 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.473192 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr"
Dec 11 10:12:21 crc kubenswrapper[4953]: E1211 10:12:21.473331 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:21 crc kubenswrapper[4953]: E1211 10:12:21.473447 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:21 crc kubenswrapper[4953]: E1211 10:12:21.473490 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:21 crc kubenswrapper[4953]: E1211 10:12:21.473525 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.513647 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.513688 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.513698 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.513711 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.513720 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:21Z","lastTransitionTime":"2025-12-11T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.617266 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.617339 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.617364 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.617394 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.617419 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:21Z","lastTransitionTime":"2025-12-11T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.720577 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.720697 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.720723 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.720753 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.720775 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:21Z","lastTransitionTime":"2025-12-11T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.823233 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.823284 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.823298 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.823324 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.823346 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:21Z","lastTransitionTime":"2025-12-11T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.925870 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.925906 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.925917 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.925932 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:21 crc kubenswrapper[4953]: I1211 10:12:21.925943 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:21Z","lastTransitionTime":"2025-12-11T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.028369 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.028471 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.028497 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.028522 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.028538 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:22Z","lastTransitionTime":"2025-12-11T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.131384 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.131743 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.131854 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.131958 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.132061 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:22Z","lastTransitionTime":"2025-12-11T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.234876 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.234923 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.234935 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.234953 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.234965 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:22Z","lastTransitionTime":"2025-12-11T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.342288 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.342327 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.342338 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.342352 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.342362 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:22Z","lastTransitionTime":"2025-12-11T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.445023 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.445089 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.445111 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.445139 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.445163 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:22Z","lastTransitionTime":"2025-12-11T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
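The err= payloads in the status_manager entries that follow are strategic-merge patches that the logger has quoted, which makes them nearly unreadable in place. A small decoding sketch; the fragment below is a shortened, hypothetical stand-in, and text copied straight from the journal carries one extra escaping level (\\\") because the patch is nested inside the already-quoted err= value, so the unquoting has to be applied twice:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Hypothetical, shortened patch fragment with a single escaping level.
	// For journal text, strconv.Unquote the whole err= value first, then
	// Unquote the embedded "failed to patch status" string it contains.
	quoted := `"{\"status\":{\"conditions\":[{\"type\":\"Ready\",\"status\":\"False\",\"reason\":\"KubeletNotReady\"}]}}"`

	patch, err := strconv.Unquote(quoted)
	if err != nil {
		fmt.Println("unquote failed:", err)
		return
	}
	var out bytes.Buffer
	if err := json.Indent(&out, []byte(patch), "", "  "); err != nil {
		fmt.Println("not valid JSON:", err)
		return
	}
	fmt.Println(out.String())
}

The same two-pass decode applies to the node status patches from 10:12:20 above.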
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.491764 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z"
Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.516249 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.532908 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.544289 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.548171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.548217 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.548231 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.548247 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.548262 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:22Z","lastTransitionTime":"2025-12-11T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.562633 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.573990 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.587781 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.603881 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.617734 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.629659 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.641365 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.650732 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.650800 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.650813 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.650831 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.651729 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:22Z","lastTransitionTime":"2025-12-11T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.656150 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.667452 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.686008 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:18Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.510892 6563 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.511234 6563 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 10:12:18.511255 6563 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:18.511261 6563 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:18.511504 6563 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 10:12:18.511509 6563 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:18.511523 6563 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:18.511549 6563 factory.go:656] Stopping watch factory\\\\nI1211 10:12:18.511589 6563 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:18.511599 6563 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 10:12:18.511606 6563 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:18.511613 6563 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:18.511620 6563 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:18.511628 6563 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.703252 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 
10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.717339 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6787fb31-272a-4dd9-b0f2-bfb5630d6901\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42af1d5ca92f02433468753b3f0f0cb74ef360928733d71e4316fb8ed77aea63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ec7f5911594475d4a03216b385df264254e50cb55ef7eee3d2ac0a88e8ef1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e43d812b41951ea02ea6aeaf53d101e762a3bc0513865818ff2dcc6506a24d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.731733 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:22Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.757303 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.757376 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.757416 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.757440 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.757459 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:22Z","lastTransitionTime":"2025-12-11T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.860056 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.860095 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.860108 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.860124 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.860136 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:22Z","lastTransitionTime":"2025-12-11T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.962547 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.962594 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.962624 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.962640 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:22 crc kubenswrapper[4953]: I1211 10:12:22.962652 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:22Z","lastTransitionTime":"2025-12-11T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.065363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.065422 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.065442 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.065461 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.065472 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:23Z","lastTransitionTime":"2025-12-11T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.167681 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.167721 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.167730 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.167778 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.167788 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:23Z","lastTransitionTime":"2025-12-11T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.270129 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.270183 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.270194 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.270210 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.270223 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:23Z","lastTransitionTime":"2025-12-11T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.372553 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.372769 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.372789 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.372812 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.372831 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:23Z","lastTransitionTime":"2025-12-11T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.472750 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.472806 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.472818 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.472778 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:23 crc kubenswrapper[4953]: E1211 10:12:23.472963 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:23 crc kubenswrapper[4953]: E1211 10:12:23.473084 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:23 crc kubenswrapper[4953]: E1211 10:12:23.473172 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:23 crc kubenswrapper[4953]: E1211 10:12:23.473228 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.475387 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.475445 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.475458 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.475478 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.475492 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:23Z","lastTransitionTime":"2025-12-11T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.578164 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.578204 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.578224 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.578236 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.578245 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:23Z","lastTransitionTime":"2025-12-11T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.680953 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.681031 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.681046 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.681069 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.681087 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:23Z","lastTransitionTime":"2025-12-11T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.783535 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.783638 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.783655 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.783675 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.783691 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:23Z","lastTransitionTime":"2025-12-11T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.886934 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.887001 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.887043 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.887076 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.887102 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:23Z","lastTransitionTime":"2025-12-11T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.989216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.989295 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.989313 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.989337 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:23 crc kubenswrapper[4953]: I1211 10:12:23.989354 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:23Z","lastTransitionTime":"2025-12-11T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.092837 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.092890 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.092922 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.092941 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.092958 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:24Z","lastTransitionTime":"2025-12-11T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.195761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.195808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.195837 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.195849 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.195861 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:24Z","lastTransitionTime":"2025-12-11T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.298741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.298793 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.298852 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.298869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.298884 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:24Z","lastTransitionTime":"2025-12-11T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.401087 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.401117 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.401125 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.401137 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.401145 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:24Z","lastTransitionTime":"2025-12-11T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.504344 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.504437 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.504463 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.504492 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.504516 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:24Z","lastTransitionTime":"2025-12-11T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.607708 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.608049 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.608130 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.608207 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.608312 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:24Z","lastTransitionTime":"2025-12-11T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.711345 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.711437 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.711465 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.711492 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.711511 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:24Z","lastTransitionTime":"2025-12-11T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.815167 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.815659 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.815902 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.816186 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.816412 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:24Z","lastTransitionTime":"2025-12-11T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.920341 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.920957 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.921216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.921479 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:24 crc kubenswrapper[4953]: I1211 10:12:24.921749 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:24Z","lastTransitionTime":"2025-12-11T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.025119 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.025167 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.025182 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.025234 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.025243 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:25Z","lastTransitionTime":"2025-12-11T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.128250 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.128330 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.128342 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.128360 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.128372 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:25Z","lastTransitionTime":"2025-12-11T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.231023 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.231068 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.231080 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.231095 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.231105 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:25Z","lastTransitionTime":"2025-12-11T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.333902 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.334305 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.334538 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.334842 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.335018 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:25Z","lastTransitionTime":"2025-12-11T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.437916 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.437963 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.437972 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.437986 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.437998 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:25Z","lastTransitionTime":"2025-12-11T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.473225 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.473295 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.473296 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:25 crc kubenswrapper[4953]: E1211 10:12:25.473384 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.473573 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:25 crc kubenswrapper[4953]: E1211 10:12:25.473723 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:25 crc kubenswrapper[4953]: E1211 10:12:25.473622 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:25 crc kubenswrapper[4953]: E1211 10:12:25.474017 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.540358 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.540399 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.540409 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.540424 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.540437 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:25Z","lastTransitionTime":"2025-12-11T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.643314 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.643606 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.643691 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.643760 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.643832 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:25Z","lastTransitionTime":"2025-12-11T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.746558 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.747080 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.747172 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.747259 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.747355 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:25Z","lastTransitionTime":"2025-12-11T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.850033 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.850369 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.850526 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.850707 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.850841 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:25Z","lastTransitionTime":"2025-12-11T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.953699 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.953732 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.953740 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.953752 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:25 crc kubenswrapper[4953]: I1211 10:12:25.953762 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:25Z","lastTransitionTime":"2025-12-11T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.056154 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.056197 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.056209 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.056227 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.056240 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:26Z","lastTransitionTime":"2025-12-11T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.158737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.158777 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.158788 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.158802 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.158815 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:26Z","lastTransitionTime":"2025-12-11T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.260811 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.260850 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.260861 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.260875 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.260884 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:26Z","lastTransitionTime":"2025-12-11T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.362803 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.363093 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.363159 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.363244 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.363305 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:26Z","lastTransitionTime":"2025-12-11T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.466130 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.466175 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.466184 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.466199 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.466209 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:26Z","lastTransitionTime":"2025-12-11T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.567951 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.567980 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.567988 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.568000 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.568008 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:26Z","lastTransitionTime":"2025-12-11T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.670300 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.670366 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.670376 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.670407 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.670418 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:26Z","lastTransitionTime":"2025-12-11T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.772609 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.772644 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.772653 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.772682 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.772692 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:26Z","lastTransitionTime":"2025-12-11T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.874665 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.874700 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.874708 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.874723 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.874731 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:26Z","lastTransitionTime":"2025-12-11T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.977293 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.977332 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.977343 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.977358 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:26 crc kubenswrapper[4953]: I1211 10:12:26.977370 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:26Z","lastTransitionTime":"2025-12-11T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.080235 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.080867 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.080938 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.081006 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.081092 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:27Z","lastTransitionTime":"2025-12-11T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.184431 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.184465 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.184477 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.184495 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.184516 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:27Z","lastTransitionTime":"2025-12-11T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.288180 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.288228 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.288237 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.288251 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.288272 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:27Z","lastTransitionTime":"2025-12-11T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.390772 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.390827 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.390844 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.390864 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.390880 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:27Z","lastTransitionTime":"2025-12-11T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.472368 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.472417 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:27 crc kubenswrapper[4953]: E1211 10:12:27.472467 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.472545 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:27 crc kubenswrapper[4953]: E1211 10:12:27.472627 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.472601 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:27 crc kubenswrapper[4953]: E1211 10:12:27.472729 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:27 crc kubenswrapper[4953]: E1211 10:12:27.472785 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.492772 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.492808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.492816 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.492830 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.492839 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:27Z","lastTransitionTime":"2025-12-11T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.595312 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.595390 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.595416 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.595447 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.595472 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:27Z","lastTransitionTime":"2025-12-11T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.697706 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.697796 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.697806 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.697820 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.697830 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:27Z","lastTransitionTime":"2025-12-11T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.799527 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.799566 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.799600 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.799617 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.799628 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:27Z","lastTransitionTime":"2025-12-11T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.901783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.901828 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.901840 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.901855 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:27 crc kubenswrapper[4953]: I1211 10:12:27.901867 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:27Z","lastTransitionTime":"2025-12-11T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.004546 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.004614 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.004626 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.004644 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.004656 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:28Z","lastTransitionTime":"2025-12-11T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.106962 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.107004 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.107016 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.107030 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.107039 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:28Z","lastTransitionTime":"2025-12-11T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.209806 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.209849 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.209859 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.209872 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.209882 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:28Z","lastTransitionTime":"2025-12-11T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.312200 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.312232 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.312240 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.312256 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.312265 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:28Z","lastTransitionTime":"2025-12-11T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.414174 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.414214 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.414225 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.414241 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.414253 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:28Z","lastTransitionTime":"2025-12-11T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.516864 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.516910 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.516923 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.516939 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.516950 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:28Z","lastTransitionTime":"2025-12-11T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.619301 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.619617 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.619811 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.620000 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.620149 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:28Z","lastTransitionTime":"2025-12-11T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.722738 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.722786 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.722796 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.722810 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.722821 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:28Z","lastTransitionTime":"2025-12-11T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.825462 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.825508 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.825523 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.825537 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.825549 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:28Z","lastTransitionTime":"2025-12-11T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.928432 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.928486 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.928496 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.928516 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:28 crc kubenswrapper[4953]: I1211 10:12:28.928538 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:28Z","lastTransitionTime":"2025-12-11T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.030782 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.030820 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.030829 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.030843 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.030854 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:29Z","lastTransitionTime":"2025-12-11T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.133407 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.133452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.133462 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.133480 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.133493 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:29Z","lastTransitionTime":"2025-12-11T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.236150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.236213 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.236232 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.236257 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.236272 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:29Z","lastTransitionTime":"2025-12-11T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.338301 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.338351 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.338363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.338380 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.338392 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:29Z","lastTransitionTime":"2025-12-11T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.440467 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.440524 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.440543 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.440559 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.440572 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:29Z","lastTransitionTime":"2025-12-11T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.472829 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.472829 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:29 crc kubenswrapper[4953]: E1211 10:12:29.472956 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.472854 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.472856 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:29 crc kubenswrapper[4953]: E1211 10:12:29.473033 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:29 crc kubenswrapper[4953]: E1211 10:12:29.473101 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:29 crc kubenswrapper[4953]: E1211 10:12:29.473182 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.545745 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.545811 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.545822 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.545837 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.545852 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:29Z","lastTransitionTime":"2025-12-11T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.649431 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.649477 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.649488 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.649512 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.649524 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:29Z","lastTransitionTime":"2025-12-11T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.752075 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.752118 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.752128 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.752140 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.752149 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:29Z","lastTransitionTime":"2025-12-11T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.855273 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.855323 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.855337 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.855355 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.855368 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:29Z","lastTransitionTime":"2025-12-11T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.958723 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.958998 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.959111 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.959216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:29 crc kubenswrapper[4953]: I1211 10:12:29.959302 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:29Z","lastTransitionTime":"2025-12-11T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.061735 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.061785 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.061798 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.061813 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.061831 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:30Z","lastTransitionTime":"2025-12-11T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.166074 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.166120 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.166146 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.166165 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.166179 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:30Z","lastTransitionTime":"2025-12-11T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.269064 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.269106 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.269118 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.269140 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.269154 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:30Z","lastTransitionTime":"2025-12-11T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.372238 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.372281 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.372291 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.372308 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.372318 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:30Z","lastTransitionTime":"2025-12-11T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.475088 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.475136 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.475145 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.475157 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.475166 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:30Z","lastTransitionTime":"2025-12-11T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.578125 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.578163 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.578173 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.578186 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.578195 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:30Z","lastTransitionTime":"2025-12-11T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.681224 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.681272 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.681284 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.681301 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.681312 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:30Z","lastTransitionTime":"2025-12-11T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.784226 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.784274 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.784287 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.784303 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.784314 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:30Z","lastTransitionTime":"2025-12-11T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.887662 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.887729 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.887752 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.887825 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.887848 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:30Z","lastTransitionTime":"2025-12-11T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.990507 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.990547 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.990557 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.990588 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:30 crc kubenswrapper[4953]: I1211 10:12:30.990600 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:30Z","lastTransitionTime":"2025-12-11T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.080124 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.080162 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.080173 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.080190 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.080201 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:31 crc kubenswrapper[4953]: E1211 10:12:31.095967 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:31Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.100114 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.100140 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.100150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.100171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.100180 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:31 crc kubenswrapper[4953]: E1211 10:12:31.113342 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status payload identical to the 10:12:31.095967 attempt] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:31Z is after 2025-08-24T17:21:41Z"
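The repeated condition above is the kubelet waiting for a CNI network configuration to appear in /etc/kubernetes/cni/net.d/; until the network provider writes one, every sync records the same KubeletNotReady condition and the node stays NotReady. As a minimal sketch (not kubelet's actual config loader, which is stricter), here is the directory check an admin might run on the node, assuming only the path quoted in the log; the extension filter is illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Lists candidate CNI config files in the directory kubelet complains about.
// Kubelet's real loader is stricter; this only mirrors the obvious check:
// is the directory empty?
func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken verbatim from the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if ext == ".conf" || ext == ".conflist" || ext == ".json" {
			fmt.Println("CNI config present:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found; node will stay NotReady")
	}
}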
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.117827 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.117873 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.117882 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.117896 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.117921 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:31 crc kubenswrapper[4953]: E1211 10:12:31.131097 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status payload identical to the 10:12:31.095967 attempt] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:31Z is after 2025-08-24T17:21:41Z"
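Each "Error updating node status, will retry" entry is one attempt in the kubelet's bounded retry loop for node status updates (upstream kubelet caps this at five attempts per sync, which appears to match the run of errors recorded here). The rejected body is a strategic merge patch of node.status, as the $setElementOrder directives indicate: the four conditions, allocatable and capacity, the cached image list, and nodeInfo. A hedged client-go sketch that reads back the same conditions the patch carries; the kubeconfig path is a placeholder, not a path taken from this system:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path, assumed for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print the same condition types the failing patch tries to update.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-15s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}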
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.135099 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.135140 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.135152 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.135171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.135184 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:31 crc kubenswrapper[4953]: E1211 10:12:31.159361 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status payload identical to the 10:12:31.095967 attempt] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:31Z is after 2025-08-24T17:21:41Z"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.164463 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.164490 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.164498 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.164511 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.164520 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:31 crc kubenswrapper[4953]: E1211 10:12:31.177911 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:31Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:31 crc kubenswrapper[4953]: E1211 10:12:31.178079 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.180063 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.180100 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.180114 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.180131 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.180144 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.284091 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.284142 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.284152 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.284167 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.284177 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.385883 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.385948 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.385959 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.385975 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.385986 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.472814 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.472833 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:31 crc kubenswrapper[4953]: E1211 10:12:31.472944 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.472965 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.473003 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:31 crc kubenswrapper[4953]: E1211 10:12:31.473104 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:31 crc kubenswrapper[4953]: E1211 10:12:31.473190 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:31 crc kubenswrapper[4953]: E1211 10:12:31.473221 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.487974 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.488013 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.488022 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.488041 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.488054 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.590266 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.590311 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.590321 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.590337 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.590348 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.696136 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.696212 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.696230 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.696255 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.696283 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.798408 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.798450 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.798464 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.798479 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.798491 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.900514 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.900591 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.900604 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.900621 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:31 crc kubenswrapper[4953]: I1211 10:12:31.900631 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:31Z","lastTransitionTime":"2025-12-11T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.003195 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.003238 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.003251 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.003281 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.003290 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:32Z","lastTransitionTime":"2025-12-11T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.106647 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.106698 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.106710 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.106730 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.106745 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:32Z","lastTransitionTime":"2025-12-11T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.209149 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.209195 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.209216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.209237 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.209251 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:32Z","lastTransitionTime":"2025-12-11T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.311477 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.311538 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.311555 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.311612 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.311635 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:32Z","lastTransitionTime":"2025-12-11T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.414044 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.414114 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.414130 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.414153 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.414168 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:32Z","lastTransitionTime":"2025-12-11T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.488052 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6787fb31-272a-4dd9-b0f2-bfb5630d6901\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42af1d5ca92f02433468753b3f0f0cb74ef360928733d71e4316fb8ed77aea63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ec7f5911594475d4a03216b385df264254e50cb55ef7eee3d2ac0a88e8ef1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e43d812b41951ea02ea6aeaf53d101e762a3bc0513865818ff2dcc6506a24d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.502358 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.517448 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.517481 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.517490 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.517503 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.517511 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:32Z","lastTransitionTime":"2025-12-11T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.518959 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.532925 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.558524 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.570944 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.586309 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.601567 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.619470 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.620957 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.620988 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.621000 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.621017 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.621031 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:32Z","lastTransitionTime":"2025-12-11T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.635231 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.649906 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.666396 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.682254 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.701831 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.715263 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.722929 4953 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.722954 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.722962 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.722975 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.722986 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:32Z","lastTransitionTime":"2025-12-11T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.739271 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141
172896eb9bc48b6fab7e7b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:18Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.510892 6563 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.511234 6563 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 10:12:18.511255 6563 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:18.511261 6563 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:18.511504 6563 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 10:12:18.511509 6563 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:18.511523 6563 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:18.511549 6563 factory.go:656] Stopping watch factory\\\\nI1211 10:12:18.511589 6563 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:18.511599 6563 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 10:12:18.511606 6563 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:18.511613 6563 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:18.511620 6563 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:18.511628 6563 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.751866 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:32Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.825750 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.825887 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.825923 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.825950 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.825961 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:32Z","lastTransitionTime":"2025-12-11T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.928522 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.928557 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.928590 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.928607 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:32 crc kubenswrapper[4953]: I1211 10:12:32.928618 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:32Z","lastTransitionTime":"2025-12-11T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.031714 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.031761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.031772 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.031790 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.031802 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:33Z","lastTransitionTime":"2025-12-11T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.135684 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.135740 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.135756 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.135777 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.135795 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:33Z","lastTransitionTime":"2025-12-11T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.238304 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.238351 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.238363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.238379 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.238391 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:33Z","lastTransitionTime":"2025-12-11T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.341076 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.341153 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.341171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.341196 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.341214 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:33Z","lastTransitionTime":"2025-12-11T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.443752 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.443806 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.443823 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.443848 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.443862 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:33Z","lastTransitionTime":"2025-12-11T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.472400 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.472445 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.472416 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:33 crc kubenswrapper[4953]: E1211 10:12:33.472543 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.472400 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:33 crc kubenswrapper[4953]: E1211 10:12:33.472682 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:33 crc kubenswrapper[4953]: E1211 10:12:33.472800 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:33 crc kubenswrapper[4953]: E1211 10:12:33.472923 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.535653 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:33 crc kubenswrapper[4953]: E1211 10:12:33.535853 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:33 crc kubenswrapper[4953]: E1211 10:12:33.535964 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs podName:86f65b63-32e0-49cc-bc96-272ecfb987ed nodeName:}" failed. No retries permitted until 2025-12-11 10:13:05.535937086 +0000 UTC m=+103.559796119 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs") pod "network-metrics-daemon-qm4mr" (UID: "86f65b63-32e0-49cc-bc96-272ecfb987ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.546075 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.546126 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.546138 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.546158 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.546168 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:33Z","lastTransitionTime":"2025-12-11T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.649330 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.649367 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.649379 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.649395 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.649407 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:33Z","lastTransitionTime":"2025-12-11T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.751973 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.752016 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.752027 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.752043 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.752054 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:33Z","lastTransitionTime":"2025-12-11T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.854118 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.854156 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.854165 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.854181 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.854194 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:33Z","lastTransitionTime":"2025-12-11T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.956846 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.956889 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.956902 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.956919 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:33 crc kubenswrapper[4953]: I1211 10:12:33.956930 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:33Z","lastTransitionTime":"2025-12-11T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.059618 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.059658 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.059670 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.059688 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.059700 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:34Z","lastTransitionTime":"2025-12-11T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.162113 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.162189 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.162202 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.162218 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.162228 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:34Z","lastTransitionTime":"2025-12-11T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.264430 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.264491 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.264506 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.264523 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.264535 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:34Z","lastTransitionTime":"2025-12-11T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.367261 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.367306 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.367317 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.367335 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.367347 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:34Z","lastTransitionTime":"2025-12-11T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.469504 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.469590 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.469606 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.469622 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.469632 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:34Z","lastTransitionTime":"2025-12-11T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.571828 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.571879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.571890 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.571905 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.571917 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:34Z","lastTransitionTime":"2025-12-11T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.674783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.674830 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.674840 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.674853 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.674862 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:34Z","lastTransitionTime":"2025-12-11T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.780380 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.780440 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.780466 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.780507 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.780534 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:34Z","lastTransitionTime":"2025-12-11T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.883626 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.883669 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.883679 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.883694 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.883703 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:34Z","lastTransitionTime":"2025-12-11T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.986081 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.986128 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.986137 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.986152 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:34 crc kubenswrapper[4953]: I1211 10:12:34.986161 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:34Z","lastTransitionTime":"2025-12-11T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.088541 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.088595 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.088607 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.088623 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.088635 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:35Z","lastTransitionTime":"2025-12-11T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.190893 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.190942 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.190952 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.190966 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.190975 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:35Z","lastTransitionTime":"2025-12-11T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.293664 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.294197 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.294282 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.294356 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.294440 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:35Z","lastTransitionTime":"2025-12-11T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.397228 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.397664 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.397776 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.397889 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.398013 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:35Z","lastTransitionTime":"2025-12-11T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.472537 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.472611 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.472710 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.472805 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:35 crc kubenswrapper[4953]: E1211 10:12:35.472988 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:35 crc kubenswrapper[4953]: E1211 10:12:35.473651 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:35 crc kubenswrapper[4953]: E1211 10:12:35.474102 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.474176 4953 scope.go:117] "RemoveContainer" containerID="60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77" Dec 11 10:12:35 crc kubenswrapper[4953]: E1211 10:12:35.474271 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:35 crc kubenswrapper[4953]: E1211 10:12:35.474774 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.500279 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.500372 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.500391 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.500415 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.500428 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:35Z","lastTransitionTime":"2025-12-11T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.602468 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.602511 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.602522 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.602535 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.602544 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:35Z","lastTransitionTime":"2025-12-11T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.704910 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.705190 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.705294 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.705393 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.705490 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:35Z","lastTransitionTime":"2025-12-11T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.808737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.808793 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.808804 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.808824 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.808836 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:35Z","lastTransitionTime":"2025-12-11T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.913181 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.913257 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.913279 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.913308 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:35 crc kubenswrapper[4953]: I1211 10:12:35.913334 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:35Z","lastTransitionTime":"2025-12-11T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.016502 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.016599 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.016624 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.016648 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.016667 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:36Z","lastTransitionTime":"2025-12-11T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.119709 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.119795 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.119809 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.119827 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.119862 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:36Z","lastTransitionTime":"2025-12-11T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.222153 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.222199 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.222210 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.222224 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.222234 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:36Z","lastTransitionTime":"2025-12-11T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.324920 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.324961 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.324973 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.324990 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.325002 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:36Z","lastTransitionTime":"2025-12-11T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.427808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.427876 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.427896 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.427922 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.427941 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:36Z","lastTransitionTime":"2025-12-11T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.530761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.530817 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.530828 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.530845 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.530857 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:36Z","lastTransitionTime":"2025-12-11T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.633738 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.633815 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.633833 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.633858 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.633877 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:36Z","lastTransitionTime":"2025-12-11T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.737323 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.737375 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.737386 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.737403 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.737418 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:36Z","lastTransitionTime":"2025-12-11T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.840492 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.840658 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.840741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.840810 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.840835 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:36Z","lastTransitionTime":"2025-12-11T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.943815 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.943860 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.943871 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.943887 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:36 crc kubenswrapper[4953]: I1211 10:12:36.943901 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:36Z","lastTransitionTime":"2025-12-11T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.046413 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.046450 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.046460 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.046474 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.046484 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:37Z","lastTransitionTime":"2025-12-11T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.149535 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.149610 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.149622 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.149639 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.149651 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:37Z","lastTransitionTime":"2025-12-11T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.253509 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.253562 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.253676 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.253696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.253707 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:37Z","lastTransitionTime":"2025-12-11T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.356663 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.356721 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.356736 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.356762 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.356776 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:37Z","lastTransitionTime":"2025-12-11T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.419745 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h4dvx_644e1d40-ab80-469e-94b4-540e52b8e2c0/kube-multus/0.log" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.419837 4953 generic.go:334] "Generic (PLEG): container finished" podID="644e1d40-ab80-469e-94b4-540e52b8e2c0" containerID="5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc" exitCode=1 Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.419940 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h4dvx" event={"ID":"644e1d40-ab80-469e-94b4-540e52b8e2c0","Type":"ContainerDied","Data":"5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc"} Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.420895 4953 scope.go:117] "RemoveContainer" containerID="5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.444659 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6787fb31-272a-4dd9-b0f2-bfb5630d6901\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42af1d5ca92f02433468753b3f0f0cb74ef360928733d71e4316fb8ed77aea63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ec7f5911594475d4a03216b385df264254e50cb55ef7eee3d2ac0a88e8ef1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e43d812b41951ea02ea6aeaf53d101e762a3bc0513865818ff2dcc6506a24d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.459340 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.459374 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.459385 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.459402 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.459414 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:37Z","lastTransitionTime":"2025-12-11T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.463494 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.472528 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.472561 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.472604 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.472672 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:37 crc kubenswrapper[4953]: E1211 10:12:37.472692 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:37 crc kubenswrapper[4953]: E1211 10:12:37.472787 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:37 crc kubenswrapper[4953]: E1211 10:12:37.472859 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:37 crc kubenswrapper[4953]: E1211 10:12:37.472962 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.475692 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.503977 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.515623 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.529308 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.542151 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.555879 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.561372 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.561659 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.561782 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.561885 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.562028 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:37Z","lastTransitionTime":"2025-12-11T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.572756 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.584809 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.598561 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:37Z\\\",\\\"message\\\":\\\"2025-12-11T10:11:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd4647c9-41fa-450f-a887-bb37cf629b23\\\\n2025-12-11T10:11:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd4647c9-41fa-450f-a887-bb37cf629b23 to /host/opt/cni/bin/\\\\n2025-12-11T10:11:52Z [verbose] multus-daemon started\\\\n2025-12-11T10:11:52Z [verbose] Readiness Indicator file check\\\\n2025-12-11T10:12:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.612188 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.627946 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.643206 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 
10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.658090 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.664304 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.664593 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.664705 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.664812 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.664949 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:37Z","lastTransitionTime":"2025-12-11T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.673505 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.692058 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:18Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.510892 6563 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.511234 6563 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 10:12:18.511255 6563 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:18.511261 6563 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:18.511504 6563 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 10:12:18.511509 6563 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:18.511523 6563 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:18.511549 6563 factory.go:656] Stopping watch factory\\\\nI1211 10:12:18.511589 6563 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:18.511599 6563 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 10:12:18.511606 6563 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:18.511613 6563 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:18.511620 6563 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:18.511628 6563 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:37Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.768528 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.768600 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.768614 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.768627 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.768637 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:37Z","lastTransitionTime":"2025-12-11T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.871333 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.871406 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.871428 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.871453 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.871473 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:37Z","lastTransitionTime":"2025-12-11T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.973789 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.973853 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.973871 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.973927 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:37 crc kubenswrapper[4953]: I1211 10:12:37.973944 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:37Z","lastTransitionTime":"2025-12-11T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.076465 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.076521 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.076536 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.076558 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.076593 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:38Z","lastTransitionTime":"2025-12-11T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.178409 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.178683 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.178823 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.178913 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.178983 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:38Z","lastTransitionTime":"2025-12-11T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.281080 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.281114 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.281126 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.281142 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.281152 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:38Z","lastTransitionTime":"2025-12-11T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.383980 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.384119 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.384130 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.384149 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.384162 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:38Z","lastTransitionTime":"2025-12-11T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.431160 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h4dvx_644e1d40-ab80-469e-94b4-540e52b8e2c0/kube-multus/0.log" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.431233 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h4dvx" event={"ID":"644e1d40-ab80-469e-94b4-540e52b8e2c0","Type":"ContainerStarted","Data":"bc80f2149ec8320584aa8fd55223ba13d53848232acd659a71bb35fdea7a043f"} Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.449017 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.462996 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.486967 4953 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.487155 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.487172 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.487190 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.487202 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:38Z","lastTransitionTime":"2025-12-11T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.489070 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141
172896eb9bc48b6fab7e7b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:18Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.510892 6563 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.511234 6563 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 10:12:18.511255 6563 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:18.511261 6563 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:18.511504 6563 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 10:12:18.511509 6563 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:18.511523 6563 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:18.511549 6563 factory.go:656] Stopping watch factory\\\\nI1211 10:12:18.511589 6563 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:18.511599 6563 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 10:12:18.511606 6563 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:18.511613 6563 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:18.511620 6563 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:18.511628 6563 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.500548 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.512925 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6787fb31-272a-4dd9-b0f2-bfb5630d6901\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42af1d5ca92f02433468753b3f0f0cb74ef360928733d71e4316fb8ed77aea63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ec7f5911594475d4a03216b385df264254e50cb55ef7eee3d2ac0a88e8ef1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e43d812b41951ea02ea6aeaf53d101e762a3bc0513865818ff2dcc6506a24d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.526091 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.542066 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\
\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.554394 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.568282 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.580616 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.589296 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.589336 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.589344 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.589362 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.589375 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:38Z","lastTransitionTime":"2025-12-11T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.590700 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.600263 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.612283 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.625087 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc80f2149ec8320584aa8fd55223ba13d53848232acd659a71bb35fdea7a043f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:37Z\\\",\\\"message\\\":\\\"2025-12-11T10:11:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd4647c9-41fa-450f-a887-bb37cf629b23\\\\n2025-12-11T10:11:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd4647c9-41fa-450f-a887-bb37cf629b23 to /host/opt/cni/bin/\\\\n2025-12-11T10:11:52Z [verbose] multus-daemon started\\\\n2025-12-11T10:11:52Z [verbose] Readiness Indicator file check\\\\n2025-12-11T10:12:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.637408 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.651120 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.662755 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:38Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.691550 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.691620 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.691635 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.691654 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:38 crc kubenswrapper[4953]: I1211 10:12:38.691675 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:38Z","lastTransitionTime":"2025-12-11T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[The five-entry block above (four kubelet_node_status.go:724 "Recording event message for node" events: NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady; then the setters.go:603 "Node became not ready" condition with the same KubeletNotReady "no CNI configuration file" message) repeats roughly every 100 ms from 10:12:38.793972 through 10:12:39.410006, with only the timestamps advancing.]
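The status-patch failures above (kube-controller-manager-crc, kube-apiserver-crc, network-check-target-xd92c) all share one cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate whose notAfter is 2025-08-24T17:21:41Z, months before the node clock of 2025-12-11. A minimal diagnostic sketch that reproduces the kubelet's x509 validity check from the node, assuming the endpoint is reachable and the third-party cryptography package is installed; the host, port, and dates come from the log, everything else is illustrative:

    # Sketch: fetch the certificate served on 127.0.0.1:9743 (the webhook
    # endpoint in the errors above) and print its validity window.
    # Assumes the third-party 'cryptography' package is available.
    import ssl
    from datetime import datetime, timezone
    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743  # from the webhook URL in the errors above

    # get_server_certificate() does not verify the chain, so an expired
    # certificate can still be fetched and inspected.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
    print("notAfter:", not_after.isoformat())
    if datetime.now(timezone.utc) > not_after:
        # Same condition the kubelet reports: current time is after notAfter.
        print("certificate has expired")

If the sketch is pointed at this endpoint, the printed notAfter should match the 2025-08-24T17:21:41Z boundary quoted in every webhook error above.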
Dec 11 10:12:39 crc kubenswrapper[4953]: I1211 10:12:39.472828 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:39 crc kubenswrapper[4953]: I1211 10:12:39.472830 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:39 crc kubenswrapper[4953]: E1211 10:12:39.472986 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:39 crc kubenswrapper[4953]: E1211 10:12:39.473023 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:39 crc kubenswrapper[4953]: I1211 10:12:39.472848 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:39 crc kubenswrapper[4953]: E1211 10:12:39.473126 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:39 crc kubenswrapper[4953]: I1211 10:12:39.473338 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:39 crc kubenswrapper[4953]: E1211 10:12:39.473522 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
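The four pods that fail to sync here are exactly the ones waiting on pod networking, and the reason is the one the NotReady condition keeps quoting: no CNI configuration file in /etc/kubernetes/cni/net.d/. A small sketch of that check; the directory comes from the log message, while the extension list follows the usual CNI convention and is a diagnostic stand-in, not the runtime's actual loader:

    # Sketch: check whether any CNI network configuration is present in the
    # directory named by the kubelet error above.
    import os

    CNI_DIR = "/etc/kubernetes/cni/net.d/"  # from the log message

    try:
        configs = [f for f in sorted(os.listdir(CNI_DIR))
                   if f.endswith((".conf", ".conflist", ".json"))]
    except FileNotFoundError:
        configs = []

    if configs:
        print("CNI configs found:", ", ".join(configs))
    else:
        # Matches the state the kubelet reports: NetworkReady stays false
        # until the network provider writes a config here.
        print("no CNI configuration file in", CNI_DIR)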
[The same five-entry "Recording event message" / "Node became not ready" block keeps repeating roughly every 100 ms from 10:12:39.512050 through 10:12:41.266342, unchanged apart from the timestamps.]
Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.335929 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.335982 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.335992 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.336005 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.336014 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:41Z","lastTransitionTime":"2025-12-11T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:41 crc kubenswrapper[4953]: E1211 10:12:41.349090 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:41Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.352938 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.352978 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.352989 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.353005 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.353017 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:41Z","lastTransitionTime":"2025-12-11T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:41 crc kubenswrapper[4953]: E1211 10:12:41.366724 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:41Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.370891 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.371005 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.371091 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.371184 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.371270 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:41Z","lastTransitionTime":"2025-12-11T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:41 crc kubenswrapper[4953]: E1211 10:12:41.383560 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:41Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.387013 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.387157 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.387225 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.387304 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.387398 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:41Z","lastTransitionTime":"2025-12-11T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:41 crc kubenswrapper[4953]: E1211 10:12:41.399251 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:41Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.403994 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.404028 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.404041 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.404056 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.404070 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:41Z","lastTransitionTime":"2025-12-11T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:41 crc kubenswrapper[4953]: E1211 10:12:41.416638 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:41Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:41 crc kubenswrapper[4953]: E1211 10:12:41.417140 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.418994 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.419051 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.419061 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.419079 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.419090 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:41Z","lastTransitionTime":"2025-12-11T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.472343 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.472413 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.472425 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:41 crc kubenswrapper[4953]: E1211 10:12:41.472515 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:41 crc kubenswrapper[4953]: E1211 10:12:41.472672 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:41 crc kubenswrapper[4953]: E1211 10:12:41.472744 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.473152 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:41 crc kubenswrapper[4953]: E1211 10:12:41.473266 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.522276 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.522345 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.522363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.522385 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.522403 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:41Z","lastTransitionTime":"2025-12-11T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.625528 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.625565 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.625588 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.625603 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:41 crc kubenswrapper[4953]: I1211 10:12:41.625612 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:41Z","lastTransitionTime":"2025-12-11T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.245218 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.245297 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.245322 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.245354 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.245377 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:42Z","lastTransitionTime":"2025-12-11T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.348916 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.348971 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.348983 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.349002 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.349015 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:42Z","lastTransitionTime":"2025-12-11T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.451174 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.451211 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.451223 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.451238 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.451250 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:42Z","lastTransitionTime":"2025-12-11T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.485542 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.511448 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.528650 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc80f2149ec8320584aa8fd55223ba13d53848232acd659a71bb35fdea7a043f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:37Z\\\",\\\"message\\\":\\\"2025-12-11T10:11:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd4647c9-41fa-450f-a887-bb37cf629b23\\\\n2025-12-11T10:11:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd4647c9-41fa-450f-a887-bb37cf629b23 to /host/opt/cni/bin/\\\\n2025-12-11T10:11:52Z [verbose] multus-daemon started\\\\n2025-12-11T10:11:52Z [verbose] Readiness Indicator file check\\\\n2025-12-11T10:12:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.542435 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.553951 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.553985 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.553995 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.554007 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.554017 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:42Z","lastTransitionTime":"2025-12-11T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.557083 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.568669 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z" Dec 11 
10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.585124 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.599247 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.623682 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:18Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.510892 6563 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.511234 6563 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 10:12:18.511255 6563 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:18.511261 6563 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:18.511504 6563 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 10:12:18.511509 6563 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:18.511523 6563 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:18.511549 6563 factory.go:656] Stopping watch factory\\\\nI1211 10:12:18.511589 6563 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:18.511599 6563 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 10:12:18.511606 6563 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:18.511613 6563 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:18.511620 6563 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:18.511628 6563 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.638267 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6787fb31-272a-4dd9-b0f2-bfb5630d6901\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42af1d5ca92f02433468753b3f0f0cb74ef360928733d71e4316fb8ed77aea63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ec7f5911594475d4a03216b385df264254e50cb55ef7eee3d2ac0a88e8ef1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e43d812b41951ea02ea6aeaf53d101e762a3bc0513865818ff2dcc6506a24d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.651832 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.656664 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.656724 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.656737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.656754 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.656766 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:42Z","lastTransitionTime":"2025-12-11T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.665283 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.681418 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.694687 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.709334 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.725860 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.736515 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:42Z is after 2025-08-24T17:21:41Z"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.759858 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.759907 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.759919 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.759935 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.759947 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:42Z","lastTransitionTime":"2025-12-11T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.863375 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.863425 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.863434 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.863450 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.863460 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:42Z","lastTransitionTime":"2025-12-11T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.966226 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.966279 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.966293 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.966311 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:42 crc kubenswrapper[4953]: I1211 10:12:42.966323 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:42Z","lastTransitionTime":"2025-12-11T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.069773 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.069822 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.069830 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.069846 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.069856 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:43Z","lastTransitionTime":"2025-12-11T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.173182 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.173235 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.173251 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.173273 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.173290 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:43Z","lastTransitionTime":"2025-12-11T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.275375 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.275420 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.275429 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.275444 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.275455 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:43Z","lastTransitionTime":"2025-12-11T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.377761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.377796 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.377808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.377824 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.377835 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:43Z","lastTransitionTime":"2025-12-11T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.472847 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.472847 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.472945 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.473124 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 10:12:43 crc kubenswrapper[4953]: E1211 10:12:43.473225 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed"
Dec 11 10:12:43 crc kubenswrapper[4953]: E1211 10:12:43.473290 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 10:12:43 crc kubenswrapper[4953]: E1211 10:12:43.473437 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 10:12:43 crc kubenswrapper[4953]: E1211 10:12:43.473469 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.480863 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.480899 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.480908 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.480923 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.480934 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:43Z","lastTransitionTime":"2025-12-11T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.485663 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.583663 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.583706 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.583716 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.583729 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.583740 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:43Z","lastTransitionTime":"2025-12-11T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.685465 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.685518 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.685530 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.685547 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.685560 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:43Z","lastTransitionTime":"2025-12-11T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.789186 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.789265 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.789278 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.789306 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.789319 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:43Z","lastTransitionTime":"2025-12-11T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.891682 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.891727 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.891741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.891757 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.891768 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:43Z","lastTransitionTime":"2025-12-11T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.994301 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.994342 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.994353 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.994369 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:43 crc kubenswrapper[4953]: I1211 10:12:43.994382 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:43Z","lastTransitionTime":"2025-12-11T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.097277 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.097366 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.097379 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.097399 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.097410 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:44Z","lastTransitionTime":"2025-12-11T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.199874 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.199923 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.199941 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.199963 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.199978 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:44Z","lastTransitionTime":"2025-12-11T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.303120 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.303526 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.303555 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.303569 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.303598 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:44Z","lastTransitionTime":"2025-12-11T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.406561 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.406634 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.406646 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.406663 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.406676 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:44Z","lastTransitionTime":"2025-12-11T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.509658 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.509713 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.509725 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.509741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.509753 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:44Z","lastTransitionTime":"2025-12-11T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.612509 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.612550 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.612559 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.612595 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.612605 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:44Z","lastTransitionTime":"2025-12-11T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.715345 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.715386 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.715395 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.715409 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.715417 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:44Z","lastTransitionTime":"2025-12-11T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.829928 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.829972 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.829984 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.830000 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.830013 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:44Z","lastTransitionTime":"2025-12-11T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.933071 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.933113 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.933125 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.933139 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:44 crc kubenswrapper[4953]: I1211 10:12:44.933150 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:44Z","lastTransitionTime":"2025-12-11T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.035906 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.035954 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.035966 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.035981 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.035990 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:45Z","lastTransitionTime":"2025-12-11T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.138845 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.138916 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.138939 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.138973 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.138998 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:45Z","lastTransitionTime":"2025-12-11T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.242485 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.242556 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.242615 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.242664 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.242684 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:45Z","lastTransitionTime":"2025-12-11T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.346362 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.346411 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.346424 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.346445 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.346458 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:45Z","lastTransitionTime":"2025-12-11T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.450748 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.450834 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.450853 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.450881 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.450899 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:45Z","lastTransitionTime":"2025-12-11T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.472940 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.472971 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.472976 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr"
Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.473759 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 10:12:45 crc kubenswrapper[4953]: E1211 10:12:45.474052 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:45 crc kubenswrapper[4953]: E1211 10:12:45.474186 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:45 crc kubenswrapper[4953]: E1211 10:12:45.474307 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:45 crc kubenswrapper[4953]: E1211 10:12:45.474447 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.553683 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.553713 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.553723 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.553736 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.553745 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:45Z","lastTransitionTime":"2025-12-11T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.657888 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.657935 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.657952 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.657977 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.657994 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:45Z","lastTransitionTime":"2025-12-11T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.761082 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.761159 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.761190 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.761225 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.761249 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:45Z","lastTransitionTime":"2025-12-11T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.864810 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.864859 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.864896 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.864914 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.864928 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:45Z","lastTransitionTime":"2025-12-11T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.968155 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.968198 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.968208 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.968223 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:45 crc kubenswrapper[4953]: I1211 10:12:45.968233 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:45Z","lastTransitionTime":"2025-12-11T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.070988 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.071025 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.071035 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.071051 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.071062 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:46Z","lastTransitionTime":"2025-12-11T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.173920 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.173978 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.173990 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.174006 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.174018 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:46Z","lastTransitionTime":"2025-12-11T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.277058 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.277096 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.277106 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.277121 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.277133 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:46Z","lastTransitionTime":"2025-12-11T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.379736 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.379795 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.379808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.379825 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.379835 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:46Z","lastTransitionTime":"2025-12-11T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.482271 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.482326 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.482338 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.482350 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.482359 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:46Z","lastTransitionTime":"2025-12-11T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.587010 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.587050 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.587062 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.587079 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.587093 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:46Z","lastTransitionTime":"2025-12-11T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.690489 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.690554 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.690599 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.690625 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.690644 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:46Z","lastTransitionTime":"2025-12-11T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.793250 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.793299 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.793308 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.793320 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.793330 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:46Z","lastTransitionTime":"2025-12-11T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.897318 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.897642 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.897758 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.897845 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:46 crc kubenswrapper[4953]: I1211 10:12:46.897931 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:46Z","lastTransitionTime":"2025-12-11T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.000852 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.000892 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.000904 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.000921 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.000932 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:47Z","lastTransitionTime":"2025-12-11T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.103681 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.103727 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.103738 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.103756 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.103768 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:47Z","lastTransitionTime":"2025-12-11T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.207006 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.207052 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.207063 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.207079 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.207091 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:47Z","lastTransitionTime":"2025-12-11T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.283125 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.283446 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.283819 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.283640374 +0000 UTC m=+149.307499447 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.484534 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.484600 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.484619 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.484643 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.484661 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:47Z","lastTransitionTime":"2025-12-11T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.486693 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.486819 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.487060 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.487151 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.487323 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.487409 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.487593 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.487674 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.588011 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.588053 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.588065 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.588080 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.588092 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:47Z","lastTransitionTime":"2025-12-11T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.688033 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.688169 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.68814997 +0000 UTC m=+149.712009003 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.688226 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.688265 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.688286 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.688376 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.688407 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.688400408 +0000 UTC m=+149.712259451 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.688406 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.688427 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.688435 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.688444 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.688456 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.688466 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.688480 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.688471841 +0000 UTC m=+149.712330884 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:12:47 crc kubenswrapper[4953]: E1211 10:12:47.688512 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.688500092 +0000 UTC m=+149.712359125 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.690077 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.690105 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.690113 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.690125 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.690133 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:47Z","lastTransitionTime":"2025-12-11T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.792384 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.792442 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.792455 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.792479 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.792504 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:47Z","lastTransitionTime":"2025-12-11T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.895893 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.895950 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.895961 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.895977 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.895986 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:47Z","lastTransitionTime":"2025-12-11T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.999030 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.999127 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.999153 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.999184 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:47 crc kubenswrapper[4953]: I1211 10:12:47.999210 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:47Z","lastTransitionTime":"2025-12-11T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.102009 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.102141 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.102347 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.102388 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.102409 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:48Z","lastTransitionTime":"2025-12-11T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.206873 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.206992 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.207044 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.207144 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.207169 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:48Z","lastTransitionTime":"2025-12-11T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.391392 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.391498 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.391512 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.391526 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.391539 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:48Z","lastTransitionTime":"2025-12-11T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.493703 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.493811 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.493856 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.493905 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.493931 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:48Z","lastTransitionTime":"2025-12-11T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.596487 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.596538 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.596550 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.596584 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.596598 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:48Z","lastTransitionTime":"2025-12-11T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.699137 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.699183 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.699200 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.699221 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.699236 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:48Z","lastTransitionTime":"2025-12-11T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.801470 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.801515 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.801528 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.801543 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.801555 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:48Z","lastTransitionTime":"2025-12-11T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.903859 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.903896 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.903907 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.903921 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:48 crc kubenswrapper[4953]: I1211 10:12:48.903934 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:48Z","lastTransitionTime":"2025-12-11T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.006290 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.006342 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.006352 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.006374 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.006388 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:49Z","lastTransitionTime":"2025-12-11T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.108391 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.108428 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.108437 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.108466 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.108480 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:49Z","lastTransitionTime":"2025-12-11T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.211591 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.211666 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.211695 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.211721 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.211741 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:49Z","lastTransitionTime":"2025-12-11T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.314509 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.314604 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.314626 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.314652 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.314673 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:49Z","lastTransitionTime":"2025-12-11T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.417408 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.417447 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.417458 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.417474 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.417486 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:49Z","lastTransitionTime":"2025-12-11T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.472840 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.472956 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.473097 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:49 crc kubenswrapper[4953]: E1211 10:12:49.473103 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.473149 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:49 crc kubenswrapper[4953]: E1211 10:12:49.473261 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:49 crc kubenswrapper[4953]: E1211 10:12:49.473870 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:49 crc kubenswrapper[4953]: E1211 10:12:49.474033 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.474564 4953 scope.go:117] "RemoveContainer" containerID="60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.521848 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.521919 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.521943 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.521974 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.521997 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:49Z","lastTransitionTime":"2025-12-11T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.625200 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.626283 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.626455 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.626658 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.626825 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:49Z","lastTransitionTime":"2025-12-11T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.729868 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.730119 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.730191 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.730269 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.730345 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:49Z","lastTransitionTime":"2025-12-11T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.834080 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.834162 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.834185 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.834217 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.834259 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:49Z","lastTransitionTime":"2025-12-11T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.937738 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.937805 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.937828 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.937862 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:49 crc kubenswrapper[4953]: I1211 10:12:49.937885 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:49Z","lastTransitionTime":"2025-12-11T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.040324 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.040879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.041068 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.041258 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.041433 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:50Z","lastTransitionTime":"2025-12-11T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.144695 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.145340 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.145563 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.145782 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.145920 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:50Z","lastTransitionTime":"2025-12-11T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.249452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.249544 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.249561 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.249620 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.249637 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:50Z","lastTransitionTime":"2025-12-11T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.354308 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.354891 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.355106 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.355284 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.355438 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:50Z","lastTransitionTime":"2025-12-11T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.458276 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.458322 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.458336 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.458355 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.458367 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:50Z","lastTransitionTime":"2025-12-11T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.501536 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/2.log" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.504166 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"} Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.504706 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.534343 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.548555 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.560832 4953 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.560866 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.560875 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.560890 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.560898 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:50Z","lastTransitionTime":"2025-12-11T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.572648 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc0cdbe5f1b125694bc32b6055f6f98ac803834
f27c54f96be12ec7c359b5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:18Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.510892 6563 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.511234 6563 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 10:12:18.511255 6563 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:18.511261 6563 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:18.511504 6563 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 10:12:18.511509 6563 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:18.511523 6563 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:18.511549 6563 factory.go:656] Stopping watch factory\\\\nI1211 10:12:18.511589 6563 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:18.511599 6563 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 10:12:18.511606 6563 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:18.511613 6563 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:18.511620 6563 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:18.511628 6563 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.586747 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 
10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.600079 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6787fb31-272a-4dd9-b0f2-bfb5630d6901\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42af1d5ca92f02433468753b3f0f0cb74ef360928733d71e4316fb8ed77aea63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ec7f5911594475d4a03216b385df264254e50cb55ef7eee3d2ac0a88e8ef1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e43d812b41951ea02ea6aeaf53d101e762a3bc0513865818ff2dcc6506a24d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.612990 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.631215 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.644706 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.656688 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.662846 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.662879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.662888 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.662901 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.662912 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:50Z","lastTransitionTime":"2025-12-11T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.668389 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.690687 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bin
ary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.702272 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.721257 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.732416 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7daafa5e-2caa-42a9-8578-35292ce8ce51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://976b60a31862ddd53a1343b6fc3d27137f731775f54572f0c6e202fe6d7db1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145c63ec5d9aa482290fb3b6e2dc891fc95675fc0124836381f31f6535eb4574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://145c63ec5d9aa482290fb3b6e2dc891fc95675fc0124836381f31f6535eb4574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.748738 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.764644 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.765602 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.765651 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.765684 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.765703 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.765714 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:50Z","lastTransitionTime":"2025-12-11T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.778540 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.789910 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc80f2149ec8320584aa8fd55223ba13d53848232acd659a71bb35fdea7a043f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:37Z\\\",\\\"message\\\":\\\"2025-12-11T10:11:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd4647c9-41fa-450f-a887-bb37cf629b23\\\\n2025-12-11T10:11:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd4647c9-41fa-450f-a887-bb37cf629b23 to /host/opt/cni/bin/\\\\n2025-12-11T10:11:52Z [verbose] multus-daemon started\\\\n2025-12-11T10:11:52Z [verbose] Readiness Indicator file check\\\\n2025-12-11T10:12:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:50Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.871169 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.871221 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.871231 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.871246 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.871259 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:50Z","lastTransitionTime":"2025-12-11T10:12:50Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.974219 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.974269 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.974281 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.974296 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:50 crc kubenswrapper[4953]: I1211 10:12:50.974307 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:50Z","lastTransitionTime":"2025-12-11T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.077073 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.077113 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.077140 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.077156 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.077166 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:51Z","lastTransitionTime":"2025-12-11T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.180105 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.180183 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.180219 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.180241 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.180252 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:51Z","lastTransitionTime":"2025-12-11T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.595335 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.595511 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.595753 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:51 crc kubenswrapper[4953]: E1211 10:12:51.595856 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:51 crc kubenswrapper[4953]: E1211 10:12:51.595745 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.595936 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:51 crc kubenswrapper[4953]: E1211 10:12:51.595984 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:51 crc kubenswrapper[4953]: E1211 10:12:51.596027 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.596663 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.596700 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.596713 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.596729 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.596740 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:51Z","lastTransitionTime":"2025-12-11T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.634104 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.634172 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.634237 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.634258 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.634271 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:51Z","lastTransitionTime":"2025-12-11T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:51 crc kubenswrapper[4953]: E1211 10:12:51.649935 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.655415 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.655468 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.655481 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.655510 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.655526 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:51Z","lastTransitionTime":"2025-12-11T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:51 crc kubenswrapper[4953]: E1211 10:12:51.669525 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.673426 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.673479 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.673488 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.673501 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.673512 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:51Z","lastTransitionTime":"2025-12-11T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:51 crc kubenswrapper[4953]: E1211 10:12:51.686474 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.691344 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.691475 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.691496 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.691615 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.691640 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:51Z","lastTransitionTime":"2025-12-11T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:51 crc kubenswrapper[4953]: E1211 10:12:51.705912 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.710336 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.710376 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.710385 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.710399 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.710409 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:51Z","lastTransitionTime":"2025-12-11T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:51 crc kubenswrapper[4953]: E1211 10:12:51.728596 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fa37296-71b7-4540-87a3-260b8ecb76f4\\\",\\\"systemUUID\\\":\\\"28c30a59-aa99-484b-82a7-0daea6b2659e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:51Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:51 crc kubenswrapper[4953]: E1211 10:12:51.728723 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.730710 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.730739 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.730748 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.730761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.730771 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:51Z","lastTransitionTime":"2025-12-11T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.832783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.832819 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.832830 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.832846 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.832857 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:51Z","lastTransitionTime":"2025-12-11T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.935537 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.935588 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.935598 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.935615 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:51 crc kubenswrapper[4953]: I1211 10:12:51.935626 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:51Z","lastTransitionTime":"2025-12-11T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.045384 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.045460 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.045470 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.045484 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.045492 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:52Z","lastTransitionTime":"2025-12-11T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.149295 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.149399 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.149419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.149482 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.149500 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:52Z","lastTransitionTime":"2025-12-11T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.253214 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.253261 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.253274 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.253294 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.253310 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:52Z","lastTransitionTime":"2025-12-11T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.355104 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.355146 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.355158 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.355174 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.355186 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:52Z","lastTransitionTime":"2025-12-11T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.458484 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.458530 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.458539 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.458553 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.458562 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:52Z","lastTransitionTime":"2025-12-11T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.496626 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.511098 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.525293 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.546783 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.560358 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.561338 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.561443 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.561459 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.561493 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.561511 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:52Z","lastTransitionTime":"2025-12-11T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.577355 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.594036 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7daafa5e-2caa-42a9-8578-35292ce8ce51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://976b60a31862ddd53a1343b6fc3d27137f731775f54572f0c6e202fe6d7db1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145c63ec5d9aa482290fb3b6e2dc891fc95675fc0124836381f31f6535eb4574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://145c63ec5d9aa482290fb3b6e2dc891fc95675fc0124836381f31
f6535eb4574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.604530 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/3.log" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.605568 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/2.log" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.609342 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1" exitCode=1 Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.609399 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"} Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.609439 4953 scope.go:117] "RemoveContainer" containerID="60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.610262 4953 scope.go:117] "RemoveContainer" containerID="7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1" Dec 11 10:12:52 crc kubenswrapper[4953]: E1211 10:12:52.610493 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.612634 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.626483 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.641542 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.659003 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc80f2149ec8320584aa8fd55223ba13d53848232acd659a71bb35fdea7a043f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:37Z\\\",\\\"message\\\":\\\"2025-12-11T10:11:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd4647c9-41fa-450f-a887-bb37cf629b23\\\\n2025-12-11T10:11:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd4647c9-41fa-450f-a887-bb37cf629b23 to /host/opt/cni/bin/\\\\n2025-12-11T10:11:52Z [verbose] multus-daemon started\\\\n2025-12-11T10:11:52Z [verbose] Readiness Indicator file check\\\\n2025-12-11T10:12:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.663559 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.663626 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.663639 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.663683 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.663697 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:52Z","lastTransitionTime":"2025-12-11T10:12:52Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.673842 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.685187 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.703615 4953 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:18Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.510892 6563 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.511234 6563 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 10:12:18.511255 6563 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:18.511261 6563 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:18.511504 6563 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 10:12:18.511509 6563 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:18.511523 6563 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:18.511549 6563 factory.go:656] Stopping watch factory\\\\nI1211 10:12:18.511589 6563 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:18.511599 6563 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 10:12:18.511606 6563 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:18.511613 6563 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:18.511620 6563 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:18.511628 6563 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.715114 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 
10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.727115 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.738687 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.748637 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6787fb31-272a-4dd9-b0f2-bfb5630d6901\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42af1d5ca92f02433468753b3f0f0cb74ef360928733d71e4316fb8ed77aea63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ec7f5911594475d4a03216b385df264254e50cb55ef7eee3d2ac0a88e8ef1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e43d812b41951ea02ea6aeaf53d101e762a3bc0513865818ff2dcc6506a24d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.757484 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7cgmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c6cf50-ac15-4b65-98a3-24d5b55a9f5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e8c3b294febaab8650ca738b055222b11b0f3502da927fb9bb1f2f30b97c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrv98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"startTime\\\":\\\"2025-12-11T10:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7cgmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.765982 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.766005 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.766013 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.766026 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.766035 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:52Z","lastTransitionTime":"2025-12-11T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.767734 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ps59j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9da9e3-3f97-49f6-9774-3c2f06987b9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8b7289e76184818bc11ef0e99cd573244647de790af79ac277a91ebf305bc1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vngds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ps59j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.783675 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d80d6bd6-dd9c-433e-93cb-2be48e4cea72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7525c3e73b38b27709833d8bf03853f82b08bafa8734d97890332f8aff9d3317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c456b7fb076e4e1f8ebe23ee26c07f2038800afe76c0e183ea82aebebe5e20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1199214a609bfaf541ddc0e45714a86f4165bdb240eda61b8ee35764cc2b6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79056708a2a0ccc6dca203be18aec98296471e9c95fa3124c910deb77f04cb23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf32e1c2b4bd849d12dcee505be41ebbd38d03660e37c1b36d65f81e55d5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dd8c365888d82936ae2eeef058fd79b7134d40d2096eeb655fc79faa658ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22373c7e841c5b2889f89395496fcd5cf912db482ef228c680812c667bead5da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f54nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.794563 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86f65b63-32e0-49cc-bc96-272ecfb987ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:12:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qm4mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.808187 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.824895 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.839404 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T10:11:37Z\\\",\\\"message\\\":\\\"W1211 10:11:26.311312 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1211 10:11:26.312053 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765447886 cert, and key in /tmp/serving-cert-3652440615/serving-signer.crt, /tmp/serving-cert-3652440615/serving-signer.key\\\\nI1211 10:11:26.711906 1 observer_polling.go:159] Starting file observer\\\\nW1211 10:11:26.714018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1211 10:11:26.714220 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 10:11:26.715195 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3652440615/tls.crt::/tmp/serving-cert-3652440615/tls.key\\\\\\\"\\\\nF1211 10:11:37.220702 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.853755 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.864937 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ec14864d78b0463b4bd4af9dfa21aec61df60a63a38b7d98ba4871716edfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.869094 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.869156 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.869170 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.869189 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.869202 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:52Z","lastTransitionTime":"2025-12-11T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.880719 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h4dvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644e1d40-ab80-469e-94b4-540e52b8e2c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc80f2149ec8320584aa8fd55223ba13d53848232acd659a71bb35fdea7a043f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:37Z\\\",\\\"message\\\":\\\"2025-12-11T10:11:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd4647c9-41fa-450f-a887-bb37cf629b23\\\\n2025-12-11T10:11:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd4647c9-41fa-450f-a887-bb37cf629b23 to /host/opt/cni/bin/\\\\n2025-12-11T10:11:52Z [verbose] multus-daemon started\\\\n2025-12-11T10:11:52Z [verbose] Readiness Indicator file check\\\\n2025-12-11T10:12:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwwr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h4dvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.893443 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d98f6e58-767e-4e80-8dc7-bf97cdc14997\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec306b9048e81de45ce4e5ae1f564ab611980d56edf94f34c48cba7299dd754e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c40bd3d558c5cff3d458a0b5a993371c3e8b6afc0035a64a21ffc0cc6c2357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b22d8239ad9f5511dc6ae773c7ea181c4e194b0847b58332e716953d9deb9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.903887 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7daafa5e-2caa-42a9-8578-35292ce8ce51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://976b60a31862ddd53a1343b6fc3d27137f731775f54572f0c6e202fe6d7db1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145c63ec5d9aa482290fb3b6e2dc891fc95675fc0124836381f31f6535eb4574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://145c63ec5d9aa482290fb3b6e2dc891fc95675fc0124836381f31f6535eb4574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.921452 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09d8243-6693-433e-bce1-8a99e5e37b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc0cdbe5f1b125694bc32b6055f6f98ac803834
f27c54f96be12ec7c359b5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a64421d848d2d5154604bb89edadbac944c141172896eb9bc48b6fab7e7b77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:18Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.510892 6563 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:18.511234 6563 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 10:12:18.511255 6563 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 10:12:18.511261 6563 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 10:12:18.511504 6563 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 10:12:18.511509 6563 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 10:12:18.511523 6563 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 10:12:18.511549 6563 factory.go:656] Stopping watch factory\\\\nI1211 10:12:18.511589 6563 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 10:12:18.511599 6563 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 10:12:18.511606 6563 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 10:12:18.511613 6563 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 10:12:18.511620 6563 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:18.511628 6563 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T10:12:52Z\\\",\\\"message\\\":\\\" reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 10:12:51.899769 7009 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 10:12:51.899980 7009 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 10:12:51.900141 7009 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 10:12:51.900230 7009 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 10:12:51.900373 7009 factory.go:656] Stopping watch factory\\\\nI1211 10:12:51.900403 7009 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 10:12:51.909482 7009 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 10:12:51.909534 7009 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 10:12:51.909695 7009 ovnkube.go:599] Stopped ovnkube\\\\nI1211 
10:12:51.909746 7009 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 10:12:51.909840 7009 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T10:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri
-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x6f57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.931995 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c4cea1-0872-4490-8195-2a195090982c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2ab3c73fffd4d07174524dd41c285309cc588049ea3896875e75982d072ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1469f484fec8f5c7863ebaa62188bc38d6553fe3ef65e315a928924306724842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnnf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 
10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.946838 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9312a7af4fcd14d64411afec83b7315dbe399254aab23665cccfa0b04a62db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.960374 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed741fb7-1326-48b7-a713-17c9f0243eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91690c6fc715e967f98fc731db9ff317a21946b0903480ee2534f5e71ae7ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z9nk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2898\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.970935 4953 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.970981 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.971011 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.971027 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.971037 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:52Z","lastTransitionTime":"2025-12-11T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.978052 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e38e7bec81ab11b9afe5c592d5c57aa1c0527e5e4031265a00a99ef8cb3c6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0ab06260b0bf565e089d1d1a78ae71e0ce94f0d5e867393dafc543f9014367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:52 crc kubenswrapper[4953]: I1211 10:12:52.992747 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6787fb31-272a-4dd9-b0f2-bfb5630d6901\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T10:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42af1d5ca92f02433468753b3f0f0cb74ef360928733d71e4316fb8ed77aea63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ec7f5911594475d4a03216b385df264254e50cb55ef7eee3d2ac0a88e8ef1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://6e43d812b41951ea02ea6aeaf53d101e762a3bc0513865818ff2dcc6506a24d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T10:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b17d140523000135ca46bbc525af1160b82222469a9ca408985ab27c2514f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T10:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T10:11:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T10:11:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T10:12:52Z is after 2025-08-24T17:21:41Z" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.074653 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.074709 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.074720 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.074736 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.074748 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:53Z","lastTransitionTime":"2025-12-11T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.178106 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.178146 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.178162 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.178181 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.178193 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:53Z","lastTransitionTime":"2025-12-11T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.281208 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.281274 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.281286 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.281311 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.281325 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:53Z","lastTransitionTime":"2025-12-11T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.384364 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.384460 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.384479 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.384502 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.384520 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:53Z","lastTransitionTime":"2025-12-11T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.472785 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.472875 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.472882 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:53 crc kubenswrapper[4953]: E1211 10:12:53.473001 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.473107 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:53 crc kubenswrapper[4953]: E1211 10:12:53.473238 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:53 crc kubenswrapper[4953]: E1211 10:12:53.473419 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:53 crc kubenswrapper[4953]: E1211 10:12:53.473519 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.486857 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.487107 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.487192 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.487319 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.487409 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:53Z","lastTransitionTime":"2025-12-11T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.590564 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.590645 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.590657 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.590676 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.590687 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:53Z","lastTransitionTime":"2025-12-11T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.615332 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/3.log" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.693991 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.694046 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.694058 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.694078 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.694091 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:53Z","lastTransitionTime":"2025-12-11T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.796788 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.797175 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.797326 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.797484 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.797678 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:53Z","lastTransitionTime":"2025-12-11T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.900191 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.900504 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.900596 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.900667 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:53 crc kubenswrapper[4953]: I1211 10:12:53.900767 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:53Z","lastTransitionTime":"2025-12-11T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.003638 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.003693 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.003711 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.003743 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.003764 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:54Z","lastTransitionTime":"2025-12-11T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.106629 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.106690 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.106710 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.106735 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.106759 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:54Z","lastTransitionTime":"2025-12-11T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.209425 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.209470 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.209519 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.209544 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.209556 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:54Z","lastTransitionTime":"2025-12-11T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.312453 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.312656 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.312677 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.312698 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.312751 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:54Z","lastTransitionTime":"2025-12-11T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.416270 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.416332 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.416352 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.416372 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.416385 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:54Z","lastTransitionTime":"2025-12-11T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.519863 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.519906 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.519918 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.519935 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.519947 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:54Z","lastTransitionTime":"2025-12-11T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.623286 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.623808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.624068 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.624281 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.624484 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:54Z","lastTransitionTime":"2025-12-11T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.727220 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.727293 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.727318 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.727347 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.727369 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:54Z","lastTransitionTime":"2025-12-11T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.830538 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.830656 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.830690 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.830719 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.830740 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:54Z","lastTransitionTime":"2025-12-11T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.933818 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.933863 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.933873 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.933889 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:54 crc kubenswrapper[4953]: I1211 10:12:54.933902 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:54Z","lastTransitionTime":"2025-12-11T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.035514 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.035555 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.035566 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.035601 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.035613 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:55Z","lastTransitionTime":"2025-12-11T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.138846 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.138939 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.138956 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.138976 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.138992 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:55Z","lastTransitionTime":"2025-12-11T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.241541 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.241600 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.241618 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.241636 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.241648 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:55Z","lastTransitionTime":"2025-12-11T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.345146 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.345301 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.345332 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.345420 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.345492 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:55Z","lastTransitionTime":"2025-12-11T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.448086 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.448152 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.448169 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.448195 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.448217 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:55Z","lastTransitionTime":"2025-12-11T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.472912 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.472971 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.473009 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.472933 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:12:55 crc kubenswrapper[4953]: E1211 10:12:55.473115 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:12:55 crc kubenswrapper[4953]: E1211 10:12:55.473261 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:12:55 crc kubenswrapper[4953]: E1211 10:12:55.473399 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:12:55 crc kubenswrapper[4953]: E1211 10:12:55.473534 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.551072 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.551150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.551167 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.551190 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.551210 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:55Z","lastTransitionTime":"2025-12-11T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.653819 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.653858 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.653867 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.653885 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.653895 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:55Z","lastTransitionTime":"2025-12-11T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.756660 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.756706 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.756716 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.756732 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.756745 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:55Z","lastTransitionTime":"2025-12-11T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.859376 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.859426 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.859438 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.859458 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.859472 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:55Z","lastTransitionTime":"2025-12-11T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.961741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.961799 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.961822 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.961853 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:12:55 crc kubenswrapper[4953]: I1211 10:12:55.961875 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:12:55Z","lastTransitionTime":"2025-12-11T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 11 10:12:57 crc kubenswrapper[4953]: I1211 10:12:57.473049 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 10:12:57 crc kubenswrapper[4953]: I1211 10:12:57.473095 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr"
Dec 11 10:12:57 crc kubenswrapper[4953]: I1211 10:12:57.473120 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 10:12:57 crc kubenswrapper[4953]: E1211 10:12:57.473191 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 10:12:57 crc kubenswrapper[4953]: I1211 10:12:57.473067 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 10:12:57 crc kubenswrapper[4953]: E1211 10:12:57.473271 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed"
Dec 11 10:12:57 crc kubenswrapper[4953]: E1211 10:12:57.473373 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 10:12:57 crc kubenswrapper[4953]: E1211 10:12:57.473408 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... status bursts resume, repeating roughly every 100 ms from 10:12:57.510 through 10:12:59.467 ...]
Dec 11 10:12:59 crc kubenswrapper[4953]: I1211 10:12:59.473003 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 10:12:59 crc kubenswrapper[4953]: I1211 10:12:59.473042 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 10:12:59 crc kubenswrapper[4953]: I1211 10:12:59.473065 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr"
Dec 11 10:12:59 crc kubenswrapper[4953]: E1211 10:12:59.473113 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 10:12:59 crc kubenswrapper[4953]: I1211 10:12:59.473003 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 10:12:59 crc kubenswrapper[4953]: E1211 10:12:59.473241 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed"
Dec 11 10:12:59 crc kubenswrapper[4953]: E1211 10:12:59.473283 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 10:12:59 crc kubenswrapper[4953]: E1211 10:12:59.473435 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... status bursts continue, repeating roughly every 100 ms from 10:12:59.570 through 10:13:01.426 ...]
Has your network provider started?"} Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.323995 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.324037 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.324049 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.324064 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.324075 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:13:01Z","lastTransitionTime":"2025-12-11T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.426864 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.426908 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.426921 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.426941 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.426954 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:13:01Z","lastTransitionTime":"2025-12-11T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.472849 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.472871 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:01 crc kubenswrapper[4953]: E1211 10:13:01.473043 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.472903 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.472871 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:01 crc kubenswrapper[4953]: E1211 10:13:01.473328 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:01 crc kubenswrapper[4953]: E1211 10:13:01.473487 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:01 crc kubenswrapper[4953]: E1211 10:13:01.473758 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.493048 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.530004 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.530048 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.530059 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.530075 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.530085 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:13:01Z","lastTransitionTime":"2025-12-11T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.632975 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.633031 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.633044 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.633070 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.633083 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:13:01Z","lastTransitionTime":"2025-12-11T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.735282 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.735324 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.735337 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.735353 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.735363 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:13:01Z","lastTransitionTime":"2025-12-11T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.838105 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.838359 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.838375 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.838392 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.838403 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:13:01Z","lastTransitionTime":"2025-12-11T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.935819 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.935905 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.935923 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.935979 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 10:13:01 crc kubenswrapper[4953]: I1211 10:13:01.935999 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T10:13:01Z","lastTransitionTime":"2025-12-11T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.016492 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd"] Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.017010 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.020611 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.020816 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.021259 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.021910 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.071097 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.071079866 podStartE2EDuration="1.071079866s" podCreationTimestamp="2025-12-11 10:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:02.070416432 +0000 UTC m=+100.094275465" watchObservedRunningTime="2025-12-11 10:13:02.071079866 +0000 UTC m=+100.094938899" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.071337 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.071333334 podStartE2EDuration="49.071333334s" podCreationTimestamp="2025-12-11 10:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:02.04073731 +0000 UTC m=+100.064596383" watchObservedRunningTime="2025-12-11 10:13:02.071333334 +0000 UTC m=+100.095192367" Dec 11 10:13:02 crc kubenswrapper[4953]: 
I1211 10:13:02.118909 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9323677f-e42a-4e13-9037-590b83c55c77-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.118993 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9323677f-e42a-4e13-9037-590b83c55c77-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.119012 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9323677f-e42a-4e13-9037-590b83c55c77-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.119036 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9323677f-e42a-4e13-9037-590b83c55c77-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.119054 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9323677f-e42a-4e13-9037-590b83c55c77-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.169077 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7cgmm" podStartSLOduration=78.169042222 podStartE2EDuration="1m18.169042222s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:02.167923063 +0000 UTC m=+100.191782096" watchObservedRunningTime="2025-12-11 10:13:02.169042222 +0000 UTC m=+100.192901255" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.182724 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ps59j" podStartSLOduration=78.182700957 podStartE2EDuration="1m18.182700957s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:02.182347365 +0000 UTC m=+100.206206408" watchObservedRunningTime="2025-12-11 10:13:02.182700957 +0000 UTC m=+100.206559990" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.200494 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-pqtrx" 
podStartSLOduration=78.200452354 podStartE2EDuration="1m18.200452354s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:02.199977838 +0000 UTC m=+100.223836891" watchObservedRunningTime="2025-12-11 10:13:02.200452354 +0000 UTC m=+100.224311387" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.219902 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9323677f-e42a-4e13-9037-590b83c55c77-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.220031 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9323677f-e42a-4e13-9037-590b83c55c77-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.220059 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9323677f-e42a-4e13-9037-590b83c55c77-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.220109 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9323677f-e42a-4e13-9037-590b83c55c77-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.220147 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9323677f-e42a-4e13-9037-590b83c55c77-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.220142 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9323677f-e42a-4e13-9037-590b83c55c77-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.220288 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9323677f-e42a-4e13-9037-590b83c55c77-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.224013 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/9323677f-e42a-4e13-9037-590b83c55c77-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.227029 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9323677f-e42a-4e13-9037-590b83c55c77-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.271913 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9323677f-e42a-4e13-9037-590b83c55c77-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ljdgd\" (UID: \"9323677f-e42a-4e13-9037-590b83c55c77\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.284222 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-h4dvx" podStartSLOduration=78.284202096 podStartE2EDuration="1m18.284202096s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:02.219217107 +0000 UTC m=+100.243076140" watchObservedRunningTime="2025-12-11 10:13:02.284202096 +0000 UTC m=+100.308061129" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.311183 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.311166545 podStartE2EDuration="1m16.311166545s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:02.283801083 +0000 UTC m=+100.307660116" watchObservedRunningTime="2025-12-11 10:13:02.311166545 +0000 UTC m=+100.335025578" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.311301 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=19.311294939 podStartE2EDuration="19.311294939s" podCreationTimestamp="2025-12-11 10:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:02.311082762 +0000 UTC m=+100.334941805" watchObservedRunningTime="2025-12-11 10:13:02.311294939 +0000 UTC m=+100.335153972" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.334641 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.33462436 podStartE2EDuration="1m18.33462436s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:02.334414263 +0000 UTC m=+100.358273296" watchObservedRunningTime="2025-12-11 10:13:02.33462436 +0000 UTC m=+100.358483393" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.335436 4953 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.445278 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podStartSLOduration=78.445261158 podStartE2EDuration="1m18.445261158s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:02.408306633 +0000 UTC m=+100.432165666" watchObservedRunningTime="2025-12-11 10:13:02.445261158 +0000 UTC m=+100.469120191" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.461639 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjhsd" podStartSLOduration=76.461619527 podStartE2EDuration="1m16.461619527s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:02.461315136 +0000 UTC m=+100.485174179" watchObservedRunningTime="2025-12-11 10:13:02.461619527 +0000 UTC m=+100.485478560" Dec 11 10:13:02 crc kubenswrapper[4953]: I1211 10:13:02.651760 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" event={"ID":"9323677f-e42a-4e13-9037-590b83c55c77","Type":"ContainerStarted","Data":"6ce21e8ed5e11b7a43b28e00f69ddd1e0be7d6998506dbd024f9955ce2850053"} Dec 11 10:13:03 crc kubenswrapper[4953]: I1211 10:13:03.472818 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:03 crc kubenswrapper[4953]: I1211 10:13:03.472931 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:03 crc kubenswrapper[4953]: I1211 10:13:03.472959 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:03 crc kubenswrapper[4953]: E1211 10:13:03.472996 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:03 crc kubenswrapper[4953]: E1211 10:13:03.473129 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:03 crc kubenswrapper[4953]: E1211 10:13:03.473234 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:03 crc kubenswrapper[4953]: I1211 10:13:03.473648 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:03 crc kubenswrapper[4953]: E1211 10:13:03.473770 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:03 crc kubenswrapper[4953]: I1211 10:13:03.474145 4953 scope.go:117] "RemoveContainer" containerID="7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1" Dec 11 10:13:03 crc kubenswrapper[4953]: E1211 10:13:03.474337 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" Dec 11 10:13:03 crc kubenswrapper[4953]: I1211 10:13:03.720194 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" event={"ID":"9323677f-e42a-4e13-9037-590b83c55c77","Type":"ContainerStarted","Data":"f4c8e79a0831904d17827c868b73cfe8d0177ae8143a7dfdbae3eb026e9ab9a1"} Dec 11 10:13:05 crc kubenswrapper[4953]: I1211 10:13:05.472634 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:05 crc kubenswrapper[4953]: I1211 10:13:05.472671 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:05 crc kubenswrapper[4953]: I1211 10:13:05.472647 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:05 crc kubenswrapper[4953]: E1211 10:13:05.472764 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:05 crc kubenswrapper[4953]: E1211 10:13:05.472904 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:05 crc kubenswrapper[4953]: I1211 10:13:05.472941 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:05 crc kubenswrapper[4953]: E1211 10:13:05.473113 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:05 crc kubenswrapper[4953]: E1211 10:13:05.473745 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:05 crc kubenswrapper[4953]: I1211 10:13:05.631649 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:05 crc kubenswrapper[4953]: E1211 10:13:05.631774 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:13:05 crc kubenswrapper[4953]: E1211 10:13:05.631842 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs podName:86f65b63-32e0-49cc-bc96-272ecfb987ed nodeName:}" failed. No retries permitted until 2025-12-11 10:14:09.631826414 +0000 UTC m=+167.655685447 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs") pod "network-metrics-daemon-qm4mr" (UID: "86f65b63-32e0-49cc-bc96-272ecfb987ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 10:13:07 crc kubenswrapper[4953]: I1211 10:13:07.472908 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:07 crc kubenswrapper[4953]: I1211 10:13:07.472923 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:07 crc kubenswrapper[4953]: I1211 10:13:07.472908 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:07 crc kubenswrapper[4953]: I1211 10:13:07.473055 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:07 crc kubenswrapper[4953]: E1211 10:13:07.473260 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:07 crc kubenswrapper[4953]: E1211 10:13:07.473329 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:07 crc kubenswrapper[4953]: E1211 10:13:07.473406 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:07 crc kubenswrapper[4953]: E1211 10:13:07.473463 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:09 crc kubenswrapper[4953]: I1211 10:13:09.473274 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:09 crc kubenswrapper[4953]: I1211 10:13:09.473276 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:09 crc kubenswrapper[4953]: E1211 10:13:09.473726 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:09 crc kubenswrapper[4953]: I1211 10:13:09.473352 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:09 crc kubenswrapper[4953]: I1211 10:13:09.473328 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:09 crc kubenswrapper[4953]: E1211 10:13:09.473944 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:09 crc kubenswrapper[4953]: E1211 10:13:09.474022 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:09 crc kubenswrapper[4953]: E1211 10:13:09.474119 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:11 crc kubenswrapper[4953]: I1211 10:13:11.472447 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:11 crc kubenswrapper[4953]: I1211 10:13:11.472513 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:11 crc kubenswrapper[4953]: I1211 10:13:11.472507 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:11 crc kubenswrapper[4953]: I1211 10:13:11.472562 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:11 crc kubenswrapper[4953]: E1211 10:13:11.472641 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:11 crc kubenswrapper[4953]: E1211 10:13:11.472745 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:11 crc kubenswrapper[4953]: E1211 10:13:11.472824 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:11 crc kubenswrapper[4953]: E1211 10:13:11.472887 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:13 crc kubenswrapper[4953]: I1211 10:13:13.472762 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:13 crc kubenswrapper[4953]: I1211 10:13:13.473089 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:13 crc kubenswrapper[4953]: I1211 10:13:13.472964 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:13 crc kubenswrapper[4953]: E1211 10:13:13.473098 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:13 crc kubenswrapper[4953]: E1211 10:13:13.473219 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:13 crc kubenswrapper[4953]: E1211 10:13:13.473325 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:13 crc kubenswrapper[4953]: I1211 10:13:13.474008 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:13 crc kubenswrapper[4953]: E1211 10:13:13.474143 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:15 crc kubenswrapper[4953]: I1211 10:13:15.472320 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:15 crc kubenswrapper[4953]: I1211 10:13:15.472387 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:15 crc kubenswrapper[4953]: I1211 10:13:15.472411 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:15 crc kubenswrapper[4953]: I1211 10:13:15.472415 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:15 crc kubenswrapper[4953]: E1211 10:13:15.472526 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:15 crc kubenswrapper[4953]: E1211 10:13:15.472661 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:15 crc kubenswrapper[4953]: E1211 10:13:15.472781 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:15 crc kubenswrapper[4953]: E1211 10:13:15.472824 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:17 crc kubenswrapper[4953]: I1211 10:13:17.472363 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:17 crc kubenswrapper[4953]: I1211 10:13:17.472368 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:17 crc kubenswrapper[4953]: E1211 10:13:17.472485 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:17 crc kubenswrapper[4953]: I1211 10:13:17.472385 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:17 crc kubenswrapper[4953]: I1211 10:13:17.472454 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:17 crc kubenswrapper[4953]: E1211 10:13:17.472757 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:17 crc kubenswrapper[4953]: E1211 10:13:17.472649 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:17 crc kubenswrapper[4953]: E1211 10:13:17.472843 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:18 crc kubenswrapper[4953]: I1211 10:13:18.473939 4953 scope.go:117] "RemoveContainer" containerID="7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1" Dec 11 10:13:18 crc kubenswrapper[4953]: E1211 10:13:18.474267 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x6f57_openshift-ovn-kubernetes(c09d8243-6693-433e-bce1-8a99e5e37b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" Dec 11 10:13:19 crc kubenswrapper[4953]: I1211 10:13:19.473161 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:19 crc kubenswrapper[4953]: I1211 10:13:19.473269 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:19 crc kubenswrapper[4953]: E1211 10:13:19.473320 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:19 crc kubenswrapper[4953]: I1211 10:13:19.473161 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:19 crc kubenswrapper[4953]: I1211 10:13:19.473269 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:19 crc kubenswrapper[4953]: E1211 10:13:19.473420 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:19 crc kubenswrapper[4953]: E1211 10:13:19.473478 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:19 crc kubenswrapper[4953]: E1211 10:13:19.473547 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:21 crc kubenswrapper[4953]: I1211 10:13:21.472822 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:21 crc kubenswrapper[4953]: E1211 10:13:21.472998 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:21 crc kubenswrapper[4953]: I1211 10:13:21.473056 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:21 crc kubenswrapper[4953]: I1211 10:13:21.473102 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:21 crc kubenswrapper[4953]: I1211 10:13:21.473064 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:21 crc kubenswrapper[4953]: E1211 10:13:21.473312 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:21 crc kubenswrapper[4953]: E1211 10:13:21.473420 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:21 crc kubenswrapper[4953]: E1211 10:13:21.473517 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:22 crc kubenswrapper[4953]: E1211 10:13:22.418818 4953 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 11 10:13:22 crc kubenswrapper[4953]: E1211 10:13:22.603422 4953 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 10:13:23 crc kubenswrapper[4953]: I1211 10:13:23.472224 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:23 crc kubenswrapper[4953]: I1211 10:13:23.472244 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:23 crc kubenswrapper[4953]: I1211 10:13:23.472292 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:23 crc kubenswrapper[4953]: E1211 10:13:23.472343 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:23 crc kubenswrapper[4953]: I1211 10:13:23.472241 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:23 crc kubenswrapper[4953]: E1211 10:13:23.472599 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:23 crc kubenswrapper[4953]: E1211 10:13:23.472676 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:23 crc kubenswrapper[4953]: E1211 10:13:23.472970 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:24 crc kubenswrapper[4953]: I1211 10:13:24.791993 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h4dvx_644e1d40-ab80-469e-94b4-540e52b8e2c0/kube-multus/1.log" Dec 11 10:13:24 crc kubenswrapper[4953]: I1211 10:13:24.792905 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h4dvx_644e1d40-ab80-469e-94b4-540e52b8e2c0/kube-multus/0.log" Dec 11 10:13:24 crc kubenswrapper[4953]: I1211 10:13:24.792977 4953 generic.go:334] "Generic (PLEG): container finished" podID="644e1d40-ab80-469e-94b4-540e52b8e2c0" containerID="bc80f2149ec8320584aa8fd55223ba13d53848232acd659a71bb35fdea7a043f" exitCode=1 Dec 11 10:13:24 crc kubenswrapper[4953]: I1211 10:13:24.793034 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h4dvx" event={"ID":"644e1d40-ab80-469e-94b4-540e52b8e2c0","Type":"ContainerDied","Data":"bc80f2149ec8320584aa8fd55223ba13d53848232acd659a71bb35fdea7a043f"} Dec 11 10:13:24 crc kubenswrapper[4953]: I1211 10:13:24.793087 4953 scope.go:117] "RemoveContainer" containerID="5f734acf34a05a9425f305c809775bae58615ae1d5f89e3b519e54d7e7abb8bc" Dec 11 10:13:24 crc kubenswrapper[4953]: I1211 10:13:24.793677 4953 scope.go:117] "RemoveContainer" containerID="bc80f2149ec8320584aa8fd55223ba13d53848232acd659a71bb35fdea7a043f" Dec 11 10:13:24 crc kubenswrapper[4953]: E1211 10:13:24.794060 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-h4dvx_openshift-multus(644e1d40-ab80-469e-94b4-540e52b8e2c0)\"" pod="openshift-multus/multus-h4dvx" podUID="644e1d40-ab80-469e-94b4-540e52b8e2c0" Dec 11 10:13:24 crc kubenswrapper[4953]: I1211 10:13:24.831978 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ljdgd" podStartSLOduration=100.83194752 podStartE2EDuration="1m40.83194752s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:03.750989565 +0000 UTC m=+101.774848598" watchObservedRunningTime="2025-12-11 10:13:24.83194752 +0000 UTC m=+122.855806593" Dec 11 10:13:25 crc kubenswrapper[4953]: I1211 10:13:25.472670 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:25 crc kubenswrapper[4953]: I1211 10:13:25.472773 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:25 crc kubenswrapper[4953]: I1211 10:13:25.472670 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:25 crc kubenswrapper[4953]: E1211 10:13:25.472837 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:25 crc kubenswrapper[4953]: I1211 10:13:25.472883 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:25 crc kubenswrapper[4953]: E1211 10:13:25.473072 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:25 crc kubenswrapper[4953]: E1211 10:13:25.473286 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:25 crc kubenswrapper[4953]: E1211 10:13:25.473330 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:25 crc kubenswrapper[4953]: I1211 10:13:25.800500 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h4dvx_644e1d40-ab80-469e-94b4-540e52b8e2c0/kube-multus/1.log" Dec 11 10:13:27 crc kubenswrapper[4953]: I1211 10:13:27.473169 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:27 crc kubenswrapper[4953]: I1211 10:13:27.473233 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:27 crc kubenswrapper[4953]: I1211 10:13:27.473244 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:27 crc kubenswrapper[4953]: I1211 10:13:27.473174 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:27 crc kubenswrapper[4953]: E1211 10:13:27.473380 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:27 crc kubenswrapper[4953]: E1211 10:13:27.473493 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:27 crc kubenswrapper[4953]: E1211 10:13:27.473703 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:27 crc kubenswrapper[4953]: E1211 10:13:27.473826 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:27 crc kubenswrapper[4953]: E1211 10:13:27.605069 4953 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 10:13:29 crc kubenswrapper[4953]: I1211 10:13:29.472616 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:29 crc kubenswrapper[4953]: E1211 10:13:29.472805 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:29 crc kubenswrapper[4953]: I1211 10:13:29.472819 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:29 crc kubenswrapper[4953]: I1211 10:13:29.472859 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:29 crc kubenswrapper[4953]: E1211 10:13:29.472934 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:29 crc kubenswrapper[4953]: E1211 10:13:29.473033 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:29 crc kubenswrapper[4953]: I1211 10:13:29.473267 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:29 crc kubenswrapper[4953]: E1211 10:13:29.473355 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:31 crc kubenswrapper[4953]: I1211 10:13:31.472493 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:31 crc kubenswrapper[4953]: E1211 10:13:31.472876 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:31 crc kubenswrapper[4953]: I1211 10:13:31.472597 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:31 crc kubenswrapper[4953]: E1211 10:13:31.472937 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:31 crc kubenswrapper[4953]: I1211 10:13:31.472549 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:31 crc kubenswrapper[4953]: E1211 10:13:31.472992 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:31 crc kubenswrapper[4953]: I1211 10:13:31.472622 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:31 crc kubenswrapper[4953]: E1211 10:13:31.473049 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:32 crc kubenswrapper[4953]: E1211 10:13:32.606104 4953 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 10:13:33 crc kubenswrapper[4953]: I1211 10:13:33.494898 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:33 crc kubenswrapper[4953]: I1211 10:13:33.495003 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:33 crc kubenswrapper[4953]: I1211 10:13:33.494910 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:33 crc kubenswrapper[4953]: I1211 10:13:33.494894 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:33 crc kubenswrapper[4953]: E1211 10:13:33.495252 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:33 crc kubenswrapper[4953]: E1211 10:13:33.495309 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:33 crc kubenswrapper[4953]: E1211 10:13:33.495417 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:33 crc kubenswrapper[4953]: E1211 10:13:33.495531 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:33 crc kubenswrapper[4953]: I1211 10:13:33.496285 4953 scope.go:117] "RemoveContainer" containerID="7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1" Dec 11 10:13:34 crc kubenswrapper[4953]: I1211 10:13:34.837973 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/3.log" Dec 11 10:13:34 crc kubenswrapper[4953]: I1211 10:13:34.840356 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerStarted","Data":"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"} Dec 11 10:13:34 crc kubenswrapper[4953]: I1211 10:13:34.840837 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:13:35 crc kubenswrapper[4953]: I1211 10:13:35.451146 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podStartSLOduration=110.451123299 podStartE2EDuration="1m50.451123299s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:34.875687786 +0000 UTC m=+132.899546819" watchObservedRunningTime="2025-12-11 10:13:35.451123299 +0000 UTC m=+133.474982332" Dec 11 10:13:35 crc kubenswrapper[4953]: I1211 10:13:35.451958 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qm4mr"] Dec 11 10:13:35 crc kubenswrapper[4953]: I1211 10:13:35.452066 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:35 crc kubenswrapper[4953]: E1211 10:13:35.452160 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:35 crc kubenswrapper[4953]: I1211 10:13:35.473401 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:35 crc kubenswrapper[4953]: E1211 10:13:35.473517 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:35 crc kubenswrapper[4953]: I1211 10:13:35.473693 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:35 crc kubenswrapper[4953]: E1211 10:13:35.473738 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:35 crc kubenswrapper[4953]: I1211 10:13:35.473837 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:35 crc kubenswrapper[4953]: E1211 10:13:35.473880 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:36 crc kubenswrapper[4953]: I1211 10:13:36.473264 4953 scope.go:117] "RemoveContainer" containerID="bc80f2149ec8320584aa8fd55223ba13d53848232acd659a71bb35fdea7a043f" Dec 11 10:13:36 crc kubenswrapper[4953]: I1211 10:13:36.930195 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h4dvx_644e1d40-ab80-469e-94b4-540e52b8e2c0/kube-multus/1.log" Dec 11 10:13:36 crc kubenswrapper[4953]: I1211 10:13:36.930260 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h4dvx" event={"ID":"644e1d40-ab80-469e-94b4-540e52b8e2c0","Type":"ContainerStarted","Data":"9b6eb9191a87c2ce29c9393a9132ddb691923181877779b571678fb5a93b9feb"} Dec 11 10:13:37 crc kubenswrapper[4953]: I1211 10:13:37.505756 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:37 crc kubenswrapper[4953]: E1211 10:13:37.505879 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:37 crc kubenswrapper[4953]: I1211 10:13:37.506056 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:37 crc kubenswrapper[4953]: I1211 10:13:37.506053 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:37 crc kubenswrapper[4953]: E1211 10:13:37.506111 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:37 crc kubenswrapper[4953]: E1211 10:13:37.506220 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:37 crc kubenswrapper[4953]: I1211 10:13:37.506387 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:37 crc kubenswrapper[4953]: E1211 10:13:37.506475 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:37 crc kubenswrapper[4953]: E1211 10:13:37.624268 4953 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 10:13:39 crc kubenswrapper[4953]: I1211 10:13:39.472767 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:39 crc kubenswrapper[4953]: I1211 10:13:39.472797 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:39 crc kubenswrapper[4953]: E1211 10:13:39.473009 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:39 crc kubenswrapper[4953]: I1211 10:13:39.472854 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:39 crc kubenswrapper[4953]: I1211 10:13:39.472807 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:39 crc kubenswrapper[4953]: E1211 10:13:39.473100 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:39 crc kubenswrapper[4953]: E1211 10:13:39.473155 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:39 crc kubenswrapper[4953]: E1211 10:13:39.473474 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:41 crc kubenswrapper[4953]: I1211 10:13:41.472378 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:41 crc kubenswrapper[4953]: I1211 10:13:41.472525 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:41 crc kubenswrapper[4953]: I1211 10:13:41.472433 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:41 crc kubenswrapper[4953]: I1211 10:13:41.472465 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:41 crc kubenswrapper[4953]: E1211 10:13:41.472688 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 10:13:41 crc kubenswrapper[4953]: E1211 10:13:41.472814 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 10:13:41 crc kubenswrapper[4953]: E1211 10:13:41.472960 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qm4mr" podUID="86f65b63-32e0-49cc-bc96-272ecfb987ed" Dec 11 10:13:41 crc kubenswrapper[4953]: E1211 10:13:41.473038 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.700656 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.756969 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j88r5"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.757543 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nzrxl"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.757906 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.758169 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.758830 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.759180 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.765682 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8s4mq"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.770431 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jnqj6"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.770923 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.771323 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.771733 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vjv7f"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.772020 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.772289 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.772645 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pzsms"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.772996 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.773388 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.773675 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-wfrqd"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.773711 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.774038 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.774095 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.774463 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.774505 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.774757 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.775378 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.775923 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.776011 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.776924 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.777438 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.777947 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.778170 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.778763 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-crtp9"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.779321 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.779932 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.781530 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.781713 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.781862 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.781885 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.782053 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.782211 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.782296 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.782385 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.782456 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.782541 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.782664 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.782741 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.781534 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.783868 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.784065 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rxj74"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.802348 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.802521 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.803279 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.815426 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 11 10:13:42 crc 
kubenswrapper[4953]: I1211 10:13:42.821946 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9jt44"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.822292 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9jt44" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.822421 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.822588 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rxj74" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.823617 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.823748 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.823783 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.823873 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.823929 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.823987 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.824046 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.824192 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.824298 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.824454 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.824550 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.827662 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.827817 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.828146 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.828160 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc"] Dec 11 
10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.828541 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.828558 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.828699 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.828838 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.828862 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.828950 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829179 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829294 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829312 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829401 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829442 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829485 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829506 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nzrxl"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829531 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829661 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829684 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829789 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829830 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829567 4953 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829853 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829979 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.830014 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.830139 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.829407 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.830248 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.830287 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.830338 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.830433 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.830460 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.830629 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.830841 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.831189 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.831215 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.831303 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.831336 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.831473 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.832131 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.832233 4953 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-console"/"console-config" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.832379 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.832524 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.832642 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.832724 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.832815 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.832905 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.833344 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.838525 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x8dvj"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.839123 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.840225 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-v8699"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.841049 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.840281 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.841886 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-r99w9"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.841479 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.842316 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.844849 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.845016 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.846094 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.846447 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.852994 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.853320 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.853548 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.859622 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.860020 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.860326 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.860587 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.889855 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890435 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890492 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-metrics-certs\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890520 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-serving-cert\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890551 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtmkl\" (UniqueName: \"kubernetes.io/projected/6940241d-144c-44c2-bc2b-6b27c9ed106d-kube-api-access-wtmkl\") pod \"cluster-image-registry-operator-dc59b4c8b-7ffjt\" (UID: \"6940241d-144c-44c2-bc2b-6b27c9ed106d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890600 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1583dc-078f-4ced-a9d9-a16856b18406-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8lqhc\" (UID: \"0b1583dc-078f-4ced-a9d9-a16856b18406\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890629 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/908334c7-0bff-48d7-b294-70e88f29aa95-images\") pod \"machine-api-operator-5694c8668f-nzrxl\" (UID: \"908334c7-0bff-48d7-b294-70e88f29aa95\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890655 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c97cb435-9028-4ea4-a6cb-7851c2845566-encryption-config\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890681 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-trusted-ca-bundle\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890706 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kb52r\" (UID: \"16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890733 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwx8v\" (UniqueName: \"kubernetes.io/projected/2100f1b5-4d63-421f-8090-601fbb1ce20d-kube-api-access-rwx8v\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890788 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3347424-53c5-4365-bcca-5ec96a8b2c0b-serving-cert\") pod \"console-operator-58897d9998-pzsms\" (UID: \"a3347424-53c5-4365-bcca-5ec96a8b2c0b\") " pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890844 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm972\" (UniqueName: \"kubernetes.io/projected/a3347424-53c5-4365-bcca-5ec96a8b2c0b-kube-api-access-pm972\") pod \"console-operator-58897d9998-pzsms\" (UID: \"a3347424-53c5-4365-bcca-5ec96a8b2c0b\") " pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890900 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-config\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.890932 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bq8s\" (UniqueName: \"kubernetes.io/projected/0b1583dc-078f-4ced-a9d9-a16856b18406-kube-api-access-5bq8s\") pod \"openshift-controller-manager-operator-756b6f6bc6-8lqhc\" (UID: \"0b1583dc-078f-4ced-a9d9-a16856b18406\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891013 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-config\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891042 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c573c49d-036d-4d92-a63d-4f830df8a262-config\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891101 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/908334c7-0bff-48d7-b294-70e88f29aa95-config\") pod \"machine-api-operator-5694c8668f-nzrxl\" (UID: \"908334c7-0bff-48d7-b294-70e88f29aa95\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891131 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/35703302-61e8-4383-9d13-0449584419e4-node-pullsecrets\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891152 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891177 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c573c49d-036d-4d92-a63d-4f830df8a262-service-ca-bundle\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891194 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-oauth-serving-cert\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891213 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-service-ca-bundle\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891240 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kb52r\" (UID: \"16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891276 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35703302-61e8-4383-9d13-0449584419e4-serving-cert\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891299 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-audit-policies\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891326 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-oauth-config\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891352 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2100f1b5-4d63-421f-8090-601fbb1ce20d-config\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891396 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2100f1b5-4d63-421f-8090-601fbb1ce20d-etcd-service-ca\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891423 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c97cb435-9028-4ea4-a6cb-7851c2845566-audit-dir\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891442 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb877\" (UniqueName: \"kubernetes.io/projected/74a6bf4e-fce1-4865-a637-13252c668255-kube-api-access-xb877\") pod \"dns-operator-744455d44c-rxj74\" (UID: \"74a6bf4e-fce1-4865-a637-13252c668255\") " pod="openshift-dns-operator/dns-operator-744455d44c-rxj74" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891462 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c573c49d-036d-4d92-a63d-4f830df8a262-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891484 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swfgh\" (UniqueName: 
\"kubernetes.io/projected/c573c49d-036d-4d92-a63d-4f830df8a262-kube-api-access-swfgh\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891503 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7gr6\" (UniqueName: \"kubernetes.io/projected/6a593442-828c-4cff-b9b9-4efa41ef6f44-kube-api-access-s7gr6\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891522 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/35703302-61e8-4383-9d13-0449584419e4-encryption-config\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891546 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c97cb435-9028-4ea4-a6cb-7851c2845566-etcd-client\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891564 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891600 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc3eba09-e19d-4f1e-abbf-01d6f9463022-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-22hb8\" (UID: \"cc3eba09-e19d-4f1e-abbf-01d6f9463022\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891646 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm46s\" (UniqueName: \"kubernetes.io/projected/c97cb435-9028-4ea4-a6cb-7851c2845566-kube-api-access-nm46s\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891664 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jtt\" (UniqueName: \"kubernetes.io/projected/b9ce2b59-c756-43bf-8114-9fe86a8c8cd9-kube-api-access-45jtt\") pod \"openshift-config-operator-7777fb866f-crtp9\" (UID: \"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891682 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9bt8h\" (UID: \"7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891700 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr7k7\" (UniqueName: \"kubernetes.io/projected/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-kube-api-access-qr7k7\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891718 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbdrq\" (UniqueName: \"kubernetes.io/projected/63ca4931-8019-4e0d-ab43-ae5bd50b8d91-kube-api-access-kbdrq\") pod \"downloads-7954f5f757-9jt44\" (UID: \"63ca4931-8019-4e0d-ab43-ae5bd50b8d91\") " pod="openshift-console/downloads-7954f5f757-9jt44" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891745 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c97cb435-9028-4ea4-a6cb-7851c2845566-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891765 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7c9ad729-d2d8-41a2-aac4-7c4909f0df98-machine-approver-tls\") pod \"machine-approver-56656f9798-jnmj9\" (UID: \"7c9ad729-d2d8-41a2-aac4-7c4909f0df98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891790 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891808 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c573c49d-036d-4d92-a63d-4f830df8a262-serving-cert\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891832 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-serving-cert\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891860 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/7c9ad729-d2d8-41a2-aac4-7c4909f0df98-auth-proxy-config\") pod \"machine-approver-56656f9798-jnmj9\" (UID: \"7c9ad729-d2d8-41a2-aac4-7c4909f0df98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891885 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3347424-53c5-4365-bcca-5ec96a8b2c0b-config\") pod \"console-operator-58897d9998-pzsms\" (UID: \"a3347424-53c5-4365-bcca-5ec96a8b2c0b\") " pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891905 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/908334c7-0bff-48d7-b294-70e88f29aa95-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nzrxl\" (UID: \"908334c7-0bff-48d7-b294-70e88f29aa95\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891924 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2sp8\" (UniqueName: \"kubernetes.io/projected/908334c7-0bff-48d7-b294-70e88f29aa95-kube-api-access-s2sp8\") pod \"machine-api-operator-5694c8668f-nzrxl\" (UID: \"908334c7-0bff-48d7-b294-70e88f29aa95\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891943 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891960 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-client-ca\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891977 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ce2b59-c756-43bf-8114-9fe86a8c8cd9-serving-cert\") pod \"openshift-config-operator-7777fb866f-crtp9\" (UID: \"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.891995 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-config\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892012 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf-config\") pod 
\"kube-apiserver-operator-766d6c64bb-9bt8h\" (UID: \"7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892027 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-audit\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892046 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35703302-61e8-4383-9d13-0449584419e4-etcd-client\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892065 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b9ce2b59-c756-43bf-8114-9fe86a8c8cd9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-crtp9\" (UID: \"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892098 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-service-ca\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892119 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c97cb435-9028-4ea4-a6cb-7851c2845566-audit-policies\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892137 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c97cb435-9028-4ea4-a6cb-7851c2845566-serving-cert\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892155 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6940241d-144c-44c2-bc2b-6b27c9ed106d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7ffjt\" (UID: \"6940241d-144c-44c2-bc2b-6b27c9ed106d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892172 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2100f1b5-4d63-421f-8090-601fbb1ce20d-etcd-client\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 
10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892211 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc3eba09-e19d-4f1e-abbf-01d6f9463022-config\") pod \"openshift-apiserver-operator-796bbdcf4f-22hb8\" (UID: \"cc3eba09-e19d-4f1e-abbf-01d6f9463022\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892232 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892248 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892271 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-default-certificate\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892290 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74a6bf4e-fce1-4865-a637-13252c668255-metrics-tls\") pod \"dns-operator-744455d44c-rxj74\" (UID: \"74a6bf4e-fce1-4865-a637-13252c668255\") " pod="openshift-dns-operator/dns-operator-744455d44c-rxj74" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892308 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv4z6\" (UniqueName: \"kubernetes.io/projected/35703302-61e8-4383-9d13-0449584419e4-kube-api-access-jv4z6\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892331 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp6wb\" (UniqueName: \"kubernetes.io/projected/f3b9e0de-9d50-4564-b075-9e56de0d6d20-kube-api-access-qp6wb\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892347 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b-config\") pod \"kube-controller-manager-operator-78b949d7b-kb52r\" (UID: \"16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" Dec 11 10:13:42 
crc kubenswrapper[4953]: I1211 10:13:42.892390 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-config\") pod \"route-controller-manager-6576b87f9c-hmk2h\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892428 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3b9e0de-9d50-4564-b075-9e56de0d6d20-audit-dir\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892463 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x9hg\" (UniqueName: \"kubernetes.io/projected/872f79b9-6f54-4b5c-bc80-cd2404dc3156-kube-api-access-6x9hg\") pod \"cluster-samples-operator-665b6dd947-8dr5c\" (UID: \"872f79b9-6f54-4b5c-bc80-cd2404dc3156\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892488 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892516 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9bt8h\" (UID: \"7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892545 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mkcj\" (UniqueName: \"kubernetes.io/projected/cc3eba09-e19d-4f1e-abbf-01d6f9463022-kube-api-access-2mkcj\") pod \"openshift-apiserver-operator-796bbdcf4f-22hb8\" (UID: \"cc3eba09-e19d-4f1e-abbf-01d6f9463022\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892586 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c97cb435-9028-4ea4-a6cb-7851c2845566-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903531 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/872f79b9-6f54-4b5c-bc80-cd2404dc3156-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8dr5c\" (UID: \"872f79b9-6f54-4b5c-bc80-cd2404dc3156\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903590 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35703302-61e8-4383-9d13-0449584419e4-audit-dir\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903617 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd4hb\" (UniqueName: \"kubernetes.io/projected/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-kube-api-access-nd4hb\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903639 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903658 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903677 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903731 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9ad729-d2d8-41a2-aac4-7c4909f0df98-config\") pod \"machine-approver-56656f9798-jnmj9\" (UID: \"7c9ad729-d2d8-41a2-aac4-7c4909f0df98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903748 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-client-ca\") pod \"route-controller-manager-6576b87f9c-hmk2h\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903767 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b1583dc-078f-4ced-a9d9-a16856b18406-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8lqhc\" (UID: 
\"0b1583dc-078f-4ced-a9d9-a16856b18406\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903802 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-etcd-serving-ca\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903871 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903908 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4crb\" (UniqueName: \"kubernetes.io/projected/7c9ad729-d2d8-41a2-aac4-7c4909f0df98-kube-api-access-f4crb\") pod \"machine-approver-56656f9798-jnmj9\" (UID: \"7c9ad729-d2d8-41a2-aac4-7c4909f0df98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903932 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2100f1b5-4d63-421f-8090-601fbb1ce20d-etcd-ca\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903979 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3347424-53c5-4365-bcca-5ec96a8b2c0b-trusted-ca\") pod \"console-operator-58897d9998-pzsms\" (UID: \"a3347424-53c5-4365-bcca-5ec96a8b2c0b\") " pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.904002 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-serving-cert\") pod \"route-controller-manager-6576b87f9c-hmk2h\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.904032 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6940241d-144c-44c2-bc2b-6b27c9ed106d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7ffjt\" (UID: \"6940241d-144c-44c2-bc2b-6b27c9ed106d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.904059 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-stats-auth\") pod \"router-default-5444994796-v8699\" (UID: 
\"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.892731 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903111 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903218 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.904087 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-image-import-ca\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.903334 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.904100 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.908214 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.909179 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.963481 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qmmnp"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.964099 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qmmnp" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.964668 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.964920 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.964921 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.918559 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hn9r\" (UniqueName: \"kubernetes.io/projected/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-kube-api-access-7hn9r\") pod \"route-controller-manager-6576b87f9c-hmk2h\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.965115 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.965135 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6940241d-144c-44c2-bc2b-6b27c9ed106d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7ffjt\" (UID: \"6940241d-144c-44c2-bc2b-6b27c9ed106d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.965153 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2100f1b5-4d63-421f-8090-601fbb1ce20d-serving-cert\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.965199 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.968545 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.968792 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.969141 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.970203 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 
10:13:42.970483 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.972528 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.972651 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.972814 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.973423 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.973913 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.974290 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-svgfk"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.974612 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.974820 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-svgfk" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.975101 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.983965 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.996786 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr"] Dec 11 10:13:42 crc kubenswrapper[4953]: I1211 10:13:42.997599 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.002068 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.011482 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-krg44"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.012478 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.012852 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.014058 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j88r5"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.017181 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.018823 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jb2sd"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.019510 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.020586 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vjv7f"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.021092 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.023065 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.024042 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.024340 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-m69bw"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.025642 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m69bw" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.025925 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jnqj6"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.027735 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.029557 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.030191 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.030793 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8s4mq"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.032075 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmb4p"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.032887 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.035924 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wfrqd"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.037789 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.038522 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.040114 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pzsms"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.041525 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.041722 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.042380 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.042990 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.043546 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.044426 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.050543 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.051865 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.053819 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.053972 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.055654 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.061014 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.061394 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.062801 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.064058 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.067758 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swfgh\" (UniqueName: \"kubernetes.io/projected/c573c49d-036d-4d92-a63d-4f830df8a262-kube-api-access-swfgh\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.067841 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7gr6\" (UniqueName: \"kubernetes.io/projected/6a593442-828c-4cff-b9b9-4efa41ef6f44-kube-api-access-s7gr6\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.067891 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/35703302-61e8-4383-9d13-0449584419e4-encryption-config\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.067937 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c97cb435-9028-4ea4-a6cb-7851c2845566-etcd-client\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.067983 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068029 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jtt\" (UniqueName: \"kubernetes.io/projected/b9ce2b59-c756-43bf-8114-9fe86a8c8cd9-kube-api-access-45jtt\") pod \"openshift-config-operator-7777fb866f-crtp9\" (UID: 
\"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068066 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9bt8h\" (UID: \"7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068106 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr7k7\" (UniqueName: \"kubernetes.io/projected/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-kube-api-access-qr7k7\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068152 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc3eba09-e19d-4f1e-abbf-01d6f9463022-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-22hb8\" (UID: \"cc3eba09-e19d-4f1e-abbf-01d6f9463022\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068183 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm46s\" (UniqueName: \"kubernetes.io/projected/c97cb435-9028-4ea4-a6cb-7851c2845566-kube-api-access-nm46s\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068214 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbdrq\" (UniqueName: \"kubernetes.io/projected/63ca4931-8019-4e0d-ab43-ae5bd50b8d91-kube-api-access-kbdrq\") pod \"downloads-7954f5f757-9jt44\" (UID: \"63ca4931-8019-4e0d-ab43-ae5bd50b8d91\") " pod="openshift-console/downloads-7954f5f757-9jt44" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068245 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c97cb435-9028-4ea4-a6cb-7851c2845566-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068291 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7c9ad729-d2d8-41a2-aac4-7c4909f0df98-machine-approver-tls\") pod \"machine-approver-56656f9798-jnmj9\" (UID: \"7c9ad729-d2d8-41a2-aac4-7c4909f0df98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068335 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 
10:13:43.068416 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c573c49d-036d-4d92-a63d-4f830df8a262-serving-cert\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068493 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c9ad729-d2d8-41a2-aac4-7c4909f0df98-auth-proxy-config\") pod \"machine-approver-56656f9798-jnmj9\" (UID: \"7c9ad729-d2d8-41a2-aac4-7c4909f0df98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068539 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-serving-cert\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068629 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2sp8\" (UniqueName: \"kubernetes.io/projected/908334c7-0bff-48d7-b294-70e88f29aa95-kube-api-access-s2sp8\") pod \"machine-api-operator-5694c8668f-nzrxl\" (UID: \"908334c7-0bff-48d7-b294-70e88f29aa95\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068678 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068762 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-client-ca\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068836 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3347424-53c5-4365-bcca-5ec96a8b2c0b-config\") pod \"console-operator-58897d9998-pzsms\" (UID: \"a3347424-53c5-4365-bcca-5ec96a8b2c0b\") " pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068886 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/908334c7-0bff-48d7-b294-70e88f29aa95-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nzrxl\" (UID: \"908334c7-0bff-48d7-b294-70e88f29aa95\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068930 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b9ce2b59-c756-43bf-8114-9fe86a8c8cd9-serving-cert\") pod \"openshift-config-operator-7777fb866f-crtp9\" (UID: \"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068966 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-config\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.068996 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf-config\") pod \"kube-apiserver-operator-766d6c64bb-9bt8h\" (UID: \"7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.069044 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-audit\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.069074 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35703302-61e8-4383-9d13-0449584419e4-etcd-client\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.069106 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b9ce2b59-c756-43bf-8114-9fe86a8c8cd9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-crtp9\" (UID: \"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.069146 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-service-ca\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.069175 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6940241d-144c-44c2-bc2b-6b27c9ed106d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7ffjt\" (UID: \"6940241d-144c-44c2-bc2b-6b27c9ed106d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.069204 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2100f1b5-4d63-421f-8090-601fbb1ce20d-etcd-client\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.069237 4953 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c97cb435-9028-4ea4-a6cb-7851c2845566-audit-policies\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.069263 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c97cb435-9028-4ea4-a6cb-7851c2845566-serving-cert\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.069308 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.069339 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.074757 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-audit\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.074916 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3347424-53c5-4365-bcca-5ec96a8b2c0b-config\") pod \"console-operator-58897d9998-pzsms\" (UID: \"a3347424-53c5-4365-bcca-5ec96a8b2c0b\") " pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.077239 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.078905 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-client-ca\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082274 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-default-certificate\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " 
pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082376 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc3eba09-e19d-4f1e-abbf-01d6f9463022-config\") pod \"openshift-apiserver-operator-796bbdcf4f-22hb8\" (UID: \"cc3eba09-e19d-4f1e-abbf-01d6f9463022\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082398 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74a6bf4e-fce1-4865-a637-13252c668255-metrics-tls\") pod \"dns-operator-744455d44c-rxj74\" (UID: \"74a6bf4e-fce1-4865-a637-13252c668255\") " pod="openshift-dns-operator/dns-operator-744455d44c-rxj74" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082419 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv4z6\" (UniqueName: \"kubernetes.io/projected/35703302-61e8-4383-9d13-0449584419e4-kube-api-access-jv4z6\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082440 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp6wb\" (UniqueName: \"kubernetes.io/projected/f3b9e0de-9d50-4564-b075-9e56de0d6d20-kube-api-access-qp6wb\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082463 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b-config\") pod \"kube-controller-manager-operator-78b949d7b-kb52r\" (UID: \"16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082480 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3b9e0de-9d50-4564-b075-9e56de0d6d20-audit-dir\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082496 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x9hg\" (UniqueName: \"kubernetes.io/projected/872f79b9-6f54-4b5c-bc80-cd2404dc3156-kube-api-access-6x9hg\") pod \"cluster-samples-operator-665b6dd947-8dr5c\" (UID: \"872f79b9-6f54-4b5c-bc80-cd2404dc3156\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082519 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-config\") pod \"route-controller-manager-6576b87f9c-hmk2h\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082542 4953 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082560 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9bt8h\" (UID: \"7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082596 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mkcj\" (UniqueName: \"kubernetes.io/projected/cc3eba09-e19d-4f1e-abbf-01d6f9463022-kube-api-access-2mkcj\") pod \"openshift-apiserver-operator-796bbdcf4f-22hb8\" (UID: \"cc3eba09-e19d-4f1e-abbf-01d6f9463022\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082615 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c97cb435-9028-4ea4-a6cb-7851c2845566-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082688 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/872f79b9-6f54-4b5c-bc80-cd2404dc3156-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8dr5c\" (UID: \"872f79b9-6f54-4b5c-bc80-cd2404dc3156\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082738 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd4hb\" (UniqueName: \"kubernetes.io/projected/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-kube-api-access-nd4hb\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082762 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082784 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082807 4953 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35703302-61e8-4383-9d13-0449584419e4-audit-dir\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082827 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082861 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9ad729-d2d8-41a2-aac4-7c4909f0df98-config\") pod \"machine-approver-56656f9798-jnmj9\" (UID: \"7c9ad729-d2d8-41a2-aac4-7c4909f0df98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082882 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-client-ca\") pod \"route-controller-manager-6576b87f9c-hmk2h\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082930 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b1583dc-078f-4ced-a9d9-a16856b18406-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8lqhc\" (UID: \"0b1583dc-078f-4ced-a9d9-a16856b18406\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082951 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082973 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-etcd-serving-ca\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082979 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6940241d-144c-44c2-bc2b-6b27c9ed106d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7ffjt\" (UID: \"6940241d-144c-44c2-bc2b-6b27c9ed106d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.082999 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4crb\" (UniqueName: \"kubernetes.io/projected/7c9ad729-d2d8-41a2-aac4-7c4909f0df98-kube-api-access-f4crb\") pod \"machine-approver-56656f9798-jnmj9\" 
(UID: \"7c9ad729-d2d8-41a2-aac4-7c4909f0df98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083095 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2100f1b5-4d63-421f-8090-601fbb1ce20d-etcd-ca\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083128 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-stats-auth\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083177 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3347424-53c5-4365-bcca-5ec96a8b2c0b-trusted-ca\") pod \"console-operator-58897d9998-pzsms\" (UID: \"a3347424-53c5-4365-bcca-5ec96a8b2c0b\") " pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083196 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-serving-cert\") pod \"route-controller-manager-6576b87f9c-hmk2h\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083213 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6940241d-144c-44c2-bc2b-6b27c9ed106d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7ffjt\" (UID: \"6940241d-144c-44c2-bc2b-6b27c9ed106d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083232 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-image-import-ca\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083249 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hn9r\" (UniqueName: \"kubernetes.io/projected/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-kube-api-access-7hn9r\") pod \"route-controller-manager-6576b87f9c-hmk2h\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083290 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083321 4953 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6940241d-144c-44c2-bc2b-6b27c9ed106d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7ffjt\" (UID: \"6940241d-144c-44c2-bc2b-6b27c9ed106d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083350 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2100f1b5-4d63-421f-8090-601fbb1ce20d-serving-cert\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083386 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083414 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-metrics-certs\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083442 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-serving-cert\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083470 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtmkl\" (UniqueName: \"kubernetes.io/projected/6940241d-144c-44c2-bc2b-6b27c9ed106d-kube-api-access-wtmkl\") pod \"cluster-image-registry-operator-dc59b4c8b-7ffjt\" (UID: \"6940241d-144c-44c2-bc2b-6b27c9ed106d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083498 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1583dc-078f-4ced-a9d9-a16856b18406-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8lqhc\" (UID: \"0b1583dc-078f-4ced-a9d9-a16856b18406\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083523 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/908334c7-0bff-48d7-b294-70e88f29aa95-images\") pod \"machine-api-operator-5694c8668f-nzrxl\" (UID: \"908334c7-0bff-48d7-b294-70e88f29aa95\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083545 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/c97cb435-9028-4ea4-a6cb-7851c2845566-encryption-config\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083594 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-trusted-ca-bundle\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083636 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kb52r\" (UID: \"16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083674 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwx8v\" (UniqueName: \"kubernetes.io/projected/2100f1b5-4d63-421f-8090-601fbb1ce20d-kube-api-access-rwx8v\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083700 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bq8s\" (UniqueName: \"kubernetes.io/projected/0b1583dc-078f-4ced-a9d9-a16856b18406-kube-api-access-5bq8s\") pod \"openshift-controller-manager-operator-756b6f6bc6-8lqhc\" (UID: \"0b1583dc-078f-4ced-a9d9-a16856b18406\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083724 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3347424-53c5-4365-bcca-5ec96a8b2c0b-serving-cert\") pod \"console-operator-58897d9998-pzsms\" (UID: \"a3347424-53c5-4365-bcca-5ec96a8b2c0b\") " pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083752 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm972\" (UniqueName: \"kubernetes.io/projected/a3347424-53c5-4365-bcca-5ec96a8b2c0b-kube-api-access-pm972\") pod \"console-operator-58897d9998-pzsms\" (UID: \"a3347424-53c5-4365-bcca-5ec96a8b2c0b\") " pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083790 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-config\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083817 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-config\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083845 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c573c49d-036d-4d92-a63d-4f830df8a262-config\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083878 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083898 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c573c49d-036d-4d92-a63d-4f830df8a262-service-ca-bundle\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083915 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-oauth-serving-cert\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083955 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/908334c7-0bff-48d7-b294-70e88f29aa95-config\") pod \"machine-api-operator-5694c8668f-nzrxl\" (UID: \"908334c7-0bff-48d7-b294-70e88f29aa95\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.083983 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/35703302-61e8-4383-9d13-0449584419e4-node-pullsecrets\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.084007 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-service-ca-bundle\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.084044 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kb52r\" (UID: \"16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.084077 4953 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2100f1b5-4d63-421f-8090-601fbb1ce20d-config\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.084101 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2100f1b5-4d63-421f-8090-601fbb1ce20d-etcd-service-ca\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.084126 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35703302-61e8-4383-9d13-0449584419e4-serving-cert\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.084152 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-audit-policies\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.084175 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-oauth-config\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.084201 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c97cb435-9028-4ea4-a6cb-7851c2845566-audit-dir\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.084224 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb877\" (UniqueName: \"kubernetes.io/projected/74a6bf4e-fce1-4865-a637-13252c668255-kube-api-access-xb877\") pod \"dns-operator-744455d44c-rxj74\" (UID: \"74a6bf4e-fce1-4865-a637-13252c668255\") " pod="openshift-dns-operator/dns-operator-744455d44c-rxj74" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.084253 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c573c49d-036d-4d92-a63d-4f830df8a262-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.084733 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.084782 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-config\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.085776 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c573c49d-036d-4d92-a63d-4f830df8a262-config\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.086365 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c573c49d-036d-4d92-a63d-4f830df8a262-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.086437 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c97cb435-9028-4ea4-a6cb-7851c2845566-audit-dir\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.086943 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.086971 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-audit-policies\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.087039 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jb2sd"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.087078 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-47rbm"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.087187 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c97cb435-9028-4ea4-a6cb-7851c2845566-audit-policies\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.087552 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-image-import-ca\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.087587 4953 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-config\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.087773 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.087881 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-47rbm" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.088869 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2100f1b5-4d63-421f-8090-601fbb1ce20d-etcd-ca\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.088936 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2100f1b5-4d63-421f-8090-601fbb1ce20d-etcd-client\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.089116 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/908334c7-0bff-48d7-b294-70e88f29aa95-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nzrxl\" (UID: \"908334c7-0bff-48d7-b294-70e88f29aa95\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.089286 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b9ce2b59-c756-43bf-8114-9fe86a8c8cd9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-crtp9\" (UID: \"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.089363 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35703302-61e8-4383-9d13-0449584419e4-audit-dir\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.089763 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3b9e0de-9d50-4564-b075-9e56de0d6d20-audit-dir\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.090810 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3347424-53c5-4365-bcca-5ec96a8b2c0b-trusted-ca\") pod \"console-operator-58897d9998-pzsms\" (UID: \"a3347424-53c5-4365-bcca-5ec96a8b2c0b\") " pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:43 crc 
kubenswrapper[4953]: I1211 10:13:43.090996 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c573c49d-036d-4d92-a63d-4f830df8a262-service-ca-bundle\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.091234 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-oauth-serving-cert\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.091543 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.091618 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/908334c7-0bff-48d7-b294-70e88f29aa95-images\") pod \"machine-api-operator-5694c8668f-nzrxl\" (UID: \"908334c7-0bff-48d7-b294-70e88f29aa95\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.092265 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/908334c7-0bff-48d7-b294-70e88f29aa95-config\") pod \"machine-api-operator-5694c8668f-nzrxl\" (UID: \"908334c7-0bff-48d7-b294-70e88f29aa95\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.092310 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/35703302-61e8-4383-9d13-0449584419e4-node-pullsecrets\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.092373 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/35703302-61e8-4383-9d13-0449584419e4-encryption-config\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.092818 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.092990 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2100f1b5-4d63-421f-8090-601fbb1ce20d-config\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") 
" pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.093134 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qmmnp"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.093158 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.093168 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-crtp9"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.093181 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x8dvj"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.093194 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc3eba09-e19d-4f1e-abbf-01d6f9463022-config\") pod \"openshift-apiserver-operator-796bbdcf4f-22hb8\" (UID: \"cc3eba09-e19d-4f1e-abbf-01d6f9463022\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.093415 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1583dc-078f-4ced-a9d9-a16856b18406-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8lqhc\" (UID: \"0b1583dc-078f-4ced-a9d9-a16856b18406\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.094206 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-config\") pod \"route-controller-manager-6576b87f9c-hmk2h\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.094594 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-trusted-ca-bundle\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.095185 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc3eba09-e19d-4f1e-abbf-01d6f9463022-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-22hb8\" (UID: \"cc3eba09-e19d-4f1e-abbf-01d6f9463022\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.095471 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.096099 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/872f79b9-6f54-4b5c-bc80-cd2404dc3156-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8dr5c\" (UID: \"872f79b9-6f54-4b5c-bc80-cd2404dc3156\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.096160 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c97cb435-9028-4ea4-a6cb-7851c2845566-etcd-client\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.096187 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-serving-cert\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.096215 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rxj74"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.096243 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9jt44"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.096382 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-service-ca\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.096943 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.097193 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c97cb435-9028-4ea4-a6cb-7851c2845566-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.097199 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.097496 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-config\") pod 
\"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.097551 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35703302-61e8-4383-9d13-0449584419e4-serving-cert\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.097706 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b1583dc-078f-4ced-a9d9-a16856b18406-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8lqhc\" (UID: \"0b1583dc-078f-4ced-a9d9-a16856b18406\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.097897 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9ad729-d2d8-41a2-aac4-7c4909f0df98-config\") pod \"machine-approver-56656f9798-jnmj9\" (UID: \"7c9ad729-d2d8-41a2-aac4-7c4909f0df98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.097996 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-r99w9"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.098182 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c9ad729-d2d8-41a2-aac4-7c4909f0df98-auth-proxy-config\") pod \"machine-approver-56656f9798-jnmj9\" (UID: \"7c9ad729-d2d8-41a2-aac4-7c4909f0df98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.098651 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.098969 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/35703302-61e8-4383-9d13-0449584419e4-etcd-serving-ca\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.099075 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-krg44"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.099522 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c573c49d-036d-4d92-a63d-4f830df8a262-serving-cert\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.100216 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74a6bf4e-fce1-4865-a637-13252c668255-metrics-tls\") pod \"dns-operator-744455d44c-rxj74\" (UID: \"74a6bf4e-fce1-4865-a637-13252c668255\") " pod="openshift-dns-operator/dns-operator-744455d44c-rxj74" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.100654 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ce2b59-c756-43bf-8114-9fe86a8c8cd9-serving-cert\") pod \"openshift-config-operator-7777fb866f-crtp9\" (UID: \"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.100677 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.100693 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.100962 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3347424-53c5-4365-bcca-5ec96a8b2c0b-serving-cert\") pod \"console-operator-58897d9998-pzsms\" (UID: \"a3347424-53c5-4365-bcca-5ec96a8b2c0b\") " pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.100985 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-oauth-config\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.101238 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.101340 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c97cb435-9028-4ea4-a6cb-7851c2845566-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.101666 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.102734 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.102824 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c"] Dec 11 10:13:43 
crc kubenswrapper[4953]: I1211 10:13:43.103018 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-serving-cert\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.104063 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-svgfk"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.104224 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c97cb435-9028-4ea4-a6cb-7851c2845566-serving-cert\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.104292 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-serving-cert\") pod \"route-controller-manager-6576b87f9c-hmk2h\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.104283 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35703302-61e8-4383-9d13-0449584419e4-etcd-client\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.104799 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c97cb435-9028-4ea4-a6cb-7851c2845566-encryption-config\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.104829 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2100f1b5-4d63-421f-8090-601fbb1ce20d-serving-cert\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.104987 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.105166 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7c9ad729-d2d8-41a2-aac4-7c4909f0df98-machine-approver-tls\") pod \"machine-approver-56656f9798-jnmj9\" (UID: \"7c9ad729-d2d8-41a2-aac4-7c4909f0df98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.106119 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns/dns-default-m69bw"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.106655 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6940241d-144c-44c2-bc2b-6b27c9ed106d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7ffjt\" (UID: \"6940241d-144c-44c2-bc2b-6b27c9ed106d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.107205 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.107464 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.108893 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.110718 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmb4p"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.111249 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2100f1b5-4d63-421f-8090-601fbb1ce20d-etcd-service-ca\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.112347 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.113462 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-65zv8"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.114361 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-65zv8" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.114801 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9sdjn"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.115848 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.116067 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.117838 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.120186 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.121457 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.121877 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9sdjn"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.123067 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-65zv8"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.124400 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8"] Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.125827 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-client-ca\") pod \"route-controller-manager-6576b87f9c-hmk2h\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.140698 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.161051 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.168955 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-stats-auth\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.182256 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.194779 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-metrics-certs\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.201430 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.221122 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.225447 4953 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-default-certificate\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.241290 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.261814 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.271351 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-service-ca-bundle\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.281110 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.301580 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.321364 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.360780 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.382045 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.401660 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.409738 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kb52r\" (UID: \"16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.421519 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.440998 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.447443 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf-config\") pod \"kube-apiserver-operator-766d6c64bb-9bt8h\" (UID: \"7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.461135 4953 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.473001 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.473086 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.473003 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.473020 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.482238 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.486636 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9bt8h\" (UID: \"7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.501379 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.514754 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b-config\") pod \"kube-controller-manager-operator-78b949d7b-kb52r\" (UID: \"16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.541198 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.561342 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.582792 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.601953 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.622710 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.654157 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.661187 4953 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.681638 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.703057 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.722783 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.741922 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.761598 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.780842 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.802318 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.821534 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.841544 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.861457 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.881240 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.901338 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.921741 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.942175 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.961840 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 11 10:13:43 crc kubenswrapper[4953]: I1211 10:13:43.981020 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.001183 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.020068 4953 request.go:700] Waited for 1.0001388s due 
to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.021263 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.042206 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.061664 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.082089 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.110399 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.122859 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.142424 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.161352 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.181833 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.201152 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.221731 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.241951 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.261715 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.281474 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.300942 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.321594 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.350468 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.361930 4953 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.381511 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.402633 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.422091 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.441670 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.460806 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.481402 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.500943 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.522824 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.542604 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.561888 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.581861 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.601544 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.636838 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr7k7\" (UniqueName: \"kubernetes.io/projected/d16293c2-d5aa-41fe-859c-0cc5201b6f0b-kube-api-access-qr7k7\") pod \"router-default-5444994796-v8699\" (UID: \"d16293c2-d5aa-41fe-859c-0cc5201b6f0b\") " pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.657499 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swfgh\" (UniqueName: \"kubernetes.io/projected/c573c49d-036d-4d92-a63d-4f830df8a262-kube-api-access-swfgh\") pod \"authentication-operator-69f744f599-vjv7f\" (UID: \"c573c49d-036d-4d92-a63d-4f830df8a262\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.687447 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.688945 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7gr6\" (UniqueName: \"kubernetes.io/projected/6a593442-828c-4cff-b9b9-4efa41ef6f44-kube-api-access-s7gr6\") pod \"console-f9d7485db-wfrqd\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.697025 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2sp8\" (UniqueName: \"kubernetes.io/projected/908334c7-0bff-48d7-b294-70e88f29aa95-kube-api-access-s2sp8\") pod \"machine-api-operator-5694c8668f-nzrxl\" (UID: \"908334c7-0bff-48d7-b294-70e88f29aa95\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.717956 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4crb\" (UniqueName: \"kubernetes.io/projected/7c9ad729-d2d8-41a2-aac4-7c4909f0df98-kube-api-access-f4crb\") pod \"machine-approver-56656f9798-jnmj9\" (UID: \"7c9ad729-d2d8-41a2-aac4-7c4909f0df98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.735134 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jtt\" (UniqueName: \"kubernetes.io/projected/b9ce2b59-c756-43bf-8114-9fe86a8c8cd9-kube-api-access-45jtt\") pod \"openshift-config-operator-7777fb866f-crtp9\" (UID: \"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.756941 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6940241d-144c-44c2-bc2b-6b27c9ed106d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7ffjt\" (UID: \"6940241d-144c-44c2-bc2b-6b27c9ed106d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.774947 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb877\" (UniqueName: \"kubernetes.io/projected/74a6bf4e-fce1-4865-a637-13252c668255-kube-api-access-xb877\") pod \"dns-operator-744455d44c-rxj74\" (UID: \"74a6bf4e-fce1-4865-a637-13252c668255\") " pod="openshift-dns-operator/dns-operator-744455d44c-rxj74" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.800761 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.806051 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hn9r\" (UniqueName: \"kubernetes.io/projected/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-kube-api-access-7hn9r\") pod \"route-controller-manager-6576b87f9c-hmk2h\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.812698 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.820816 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.842188 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.843278 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" Dec 11 10:13:44 crc kubenswrapper[4953]: W1211 10:13:44.858753 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c9ad729_d2d8_41a2_aac4_7c4909f0df98.slice/crio-ab8c852364751856f7b9bf6731616e887d7267686f4a5f34880a19a1522a9812 WatchSource:0}: Error finding container ab8c852364751856f7b9bf6731616e887d7267686f4a5f34880a19a1522a9812: Status 404 returned error can't find the container with id ab8c852364751856f7b9bf6731616e887d7267686f4a5f34880a19a1522a9812 Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.861081 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.875979 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mkcj\" (UniqueName: \"kubernetes.io/projected/cc3eba09-e19d-4f1e-abbf-01d6f9463022-kube-api-access-2mkcj\") pod \"openshift-apiserver-operator-796bbdcf4f-22hb8\" (UID: \"cc3eba09-e19d-4f1e-abbf-01d6f9463022\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.886920 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.904994 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x9hg\" (UniqueName: \"kubernetes.io/projected/872f79b9-6f54-4b5c-bc80-cd2404dc3156-kube-api-access-6x9hg\") pod \"cluster-samples-operator-665b6dd947-8dr5c\" (UID: \"872f79b9-6f54-4b5c-bc80-cd2404dc3156\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.915312 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtmkl\" (UniqueName: \"kubernetes.io/projected/6940241d-144c-44c2-bc2b-6b27c9ed106d-kube-api-access-wtmkl\") pod \"cluster-image-registry-operator-dc59b4c8b-7ffjt\" (UID: \"6940241d-144c-44c2-bc2b-6b27c9ed106d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.934540 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kb52r\" (UID: \"16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.940767 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.944105 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rxj74" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.962647 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.968298 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd4hb\" (UniqueName: \"kubernetes.io/projected/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-kube-api-access-nd4hb\") pod \"controller-manager-879f6c89f-jnqj6\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.980081 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" event={"ID":"7c9ad729-d2d8-41a2-aac4-7c4909f0df98","Type":"ContainerStarted","Data":"ab8c852364751856f7b9bf6731616e887d7267686f4a5f34880a19a1522a9812"} Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.981005 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-v8699" event={"ID":"d16293c2-d5aa-41fe-859c-0cc5201b6f0b","Type":"ContainerStarted","Data":"f5a533e63e8bd7f30b491dddc3bff51d5a0587707d7de73be4ddeebb022efa27"} Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.983484 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bq8s\" (UniqueName: \"kubernetes.io/projected/0b1583dc-078f-4ced-a9d9-a16856b18406-kube-api-access-5bq8s\") pod \"openshift-controller-manager-operator-756b6f6bc6-8lqhc\" (UID: \"0b1583dc-078f-4ced-a9d9-a16856b18406\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.987127 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" Dec 11 10:13:44 crc kubenswrapper[4953]: I1211 10:13:44.995814 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm972\" (UniqueName: \"kubernetes.io/projected/a3347424-53c5-4365-bcca-5ec96a8b2c0b-kube-api-access-pm972\") pod \"console-operator-58897d9998-pzsms\" (UID: \"a3347424-53c5-4365-bcca-5ec96a8b2c0b\") " pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.014163 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.017549 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbdrq\" (UniqueName: \"kubernetes.io/projected/63ca4931-8019-4e0d-ab43-ae5bd50b8d91-kube-api-access-kbdrq\") pod \"downloads-7954f5f757-9jt44\" (UID: \"63ca4931-8019-4e0d-ab43-ae5bd50b8d91\") " pod="openshift-console/downloads-7954f5f757-9jt44" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.236829 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.238626 4953 request.go:700] Waited for 2.122515055s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.238830 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.239391 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.240164 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9jt44" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.240438 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.241247 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.241675 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.241798 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.241904 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.242041 4953 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.242153 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.242265 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.252884 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.254848 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9bt8h\" (UID: \"7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.256416 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwx8v\" (UniqueName: \"kubernetes.io/projected/2100f1b5-4d63-421f-8090-601fbb1ce20d-kube-api-access-rwx8v\") pod \"etcd-operator-b45778765-x8dvj\" (UID: \"2100f1b5-4d63-421f-8090-601fbb1ce20d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.257490 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv4z6\" (UniqueName: \"kubernetes.io/projected/35703302-61e8-4383-9d13-0449584419e4-kube-api-access-jv4z6\") pod \"apiserver-76f77b778f-j88r5\" (UID: \"35703302-61e8-4383-9d13-0449584419e4\") " pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.258994 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm46s\" (UniqueName: \"kubernetes.io/projected/c97cb435-9028-4ea4-a6cb-7851c2845566-kube-api-access-nm46s\") pod \"apiserver-7bbb656c7d-9shds\" (UID: \"c97cb435-9028-4ea4-a6cb-7851c2845566\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.263902 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.270122 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp6wb\" (UniqueName: \"kubernetes.io/projected/f3b9e0de-9d50-4564-b075-9e56de0d6d20-kube-api-access-qp6wb\") pod \"oauth-openshift-558db77b4-8s4mq\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.277887 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.281693 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.301206 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.321097 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.325832 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.341301 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.361803 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.379735 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.381846 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.403070 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.678260 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.678681 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-registry-tls\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: E1211 10:13:45.684013 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:46.183973682 +0000 UTC m=+144.207832715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.781432 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.781671 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1a4e773-6467-424c-935e-40ef82e5fa99-registry-certificates\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.781731 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1a4e773-6467-424c-935e-40ef82e5fa99-trusted-ca\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.781765 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1a4e773-6467-424c-935e-40ef82e5fa99-ca-trust-extracted\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.781784 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drxnt\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-kube-api-access-drxnt\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.781862 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-bound-sa-token\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.781882 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1a4e773-6467-424c-935e-40ef82e5fa99-installation-pull-secrets\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.781908 4953 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-registry-tls\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: E1211 10:13:45.782692 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:46.282379103 +0000 UTC m=+144.306238136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.803792 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-registry-tls\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.886637 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f829d83c-e4f2-4e16-b02e-57b445b6fa41-serving-cert\") pod \"service-ca-operator-777779d784-ml8wp\" (UID: \"f829d83c-e4f2-4e16-b02e-57b445b6fa41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.886734 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/332434db-75e4-4fce-8973-aff84310d0f5-apiservice-cert\") pod \"packageserver-d55dfcdfc-6jmjq\" (UID: \"332434db-75e4-4fce-8973-aff84310d0f5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.886786 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3-metrics-tls\") pod \"dns-default-m69bw\" (UID: \"090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3\") " pod="openshift-dns/dns-default-m69bw" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.886817 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1a4e773-6467-424c-935e-40ef82e5fa99-trusted-ca\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.886873 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c310bdcf-5786-4970-a81d-651417521b3c-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-b55xt\" (UID: \"c310bdcf-5786-4970-a81d-651417521b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.886898 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj9nt\" (UniqueName: \"kubernetes.io/projected/7a0b9050-3135-4670-bb41-0b7cf15918e6-kube-api-access-vj9nt\") pod \"service-ca-9c57cc56f-jb2sd\" (UID: \"7a0b9050-3135-4670-bb41-0b7cf15918e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.887014 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjczj\" (UniqueName: \"kubernetes.io/projected/3374cd21-f51d-4fd3-afe6-8fd43d81622a-kube-api-access-rjczj\") pod \"machine-config-operator-74547568cd-krg44\" (UID: \"3374cd21-f51d-4fd3-afe6-8fd43d81622a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.887040 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06554344-a634-4dec-aaf7-e3d9919d9e80-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xmb4p\" (UID: \"06554344-a634-4dec-aaf7-e3d9919d9e80\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.887101 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3-config-volume\") pod \"dns-default-m69bw\" (UID: \"090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3\") " pod="openshift-dns/dns-default-m69bw" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.887139 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1a4e773-6467-424c-935e-40ef82e5fa99-ca-trust-extracted\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.887196 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drxnt\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-kube-api-access-drxnt\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.888272 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1904c62-b304-45f8-a72b-e89e77597ec1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-69q67\" (UID: \"b1904c62-b304-45f8-a72b-e89e77597ec1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.888431 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7slkz\" (UniqueName: 
\"kubernetes.io/projected/f829d83c-e4f2-4e16-b02e-57b445b6fa41-kube-api-access-7slkz\") pod \"service-ca-operator-777779d784-ml8wp\" (UID: \"f829d83c-e4f2-4e16-b02e-57b445b6fa41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.888743 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c7d0de5-9432-4fd6-b44d-6529c186be7e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bc5n5\" (UID: \"7c7d0de5-9432-4fd6-b44d-6529c186be7e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.888797 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c6ad29d-983a-4388-a962-3ee6af6f042f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b82cx\" (UID: \"5c6ad29d-983a-4388-a962-3ee6af6f042f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.888861 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88498e28-0a15-43a5-b157-5a3baccfaaaf-config-volume\") pod \"collect-profiles-29424120-hdqwl\" (UID: \"88498e28-0a15-43a5-b157-5a3baccfaaaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889012 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5s4v\" (UniqueName: \"kubernetes.io/projected/f8fc0aff-efbc-4801-ad90-7b849c816858-kube-api-access-n5s4v\") pod \"machine-config-server-47rbm\" (UID: \"f8fc0aff-efbc-4801-ad90-7b849c816858\") " pod="openshift-machine-config-operator/machine-config-server-47rbm" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889063 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3374cd21-f51d-4fd3-afe6-8fd43d81622a-proxy-tls\") pod \"machine-config-operator-74547568cd-krg44\" (UID: \"3374cd21-f51d-4fd3-afe6-8fd43d81622a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889089 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06554344-a634-4dec-aaf7-e3d9919d9e80-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xmb4p\" (UID: \"06554344-a634-4dec-aaf7-e3d9919d9e80\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889151 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgg9b\" (UniqueName: \"kubernetes.io/projected/24a3a305-afdf-4c02-b335-b8c173651e93-kube-api-access-jgg9b\") pod \"control-plane-machine-set-operator-78cbb6b69f-vrm5k\" (UID: \"24a3a305-afdf-4c02-b335-b8c173651e93\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889176 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-registration-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889249 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889293 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1904c62-b304-45f8-a72b-e89e77597ec1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-69q67\" (UID: \"b1904c62-b304-45f8-a72b-e89e77597ec1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889351 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3cee2756-70ae-44b9-b52a-43cf1bc552e0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cl6x8\" (UID: \"3cee2756-70ae-44b9-b52a-43cf1bc552e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889389 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7a0b9050-3135-4670-bb41-0b7cf15918e6-signing-key\") pod \"service-ca-9c57cc56f-jb2sd\" (UID: \"7a0b9050-3135-4670-bb41-0b7cf15918e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889450 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wbgz\" (UniqueName: \"kubernetes.io/projected/332434db-75e4-4fce-8973-aff84310d0f5-kube-api-access-5wbgz\") pod \"packageserver-d55dfcdfc-6jmjq\" (UID: \"332434db-75e4-4fce-8973-aff84310d0f5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889477 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrqhl\" (UniqueName: \"kubernetes.io/projected/c310bdcf-5786-4970-a81d-651417521b3c-kube-api-access-wrqhl\") pod \"kube-storage-version-migrator-operator-b67b599dd-b55xt\" (UID: \"c310bdcf-5786-4970-a81d-651417521b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889501 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7a0b9050-3135-4670-bb41-0b7cf15918e6-signing-cabundle\") pod \"service-ca-9c57cc56f-jb2sd\" (UID: \"7a0b9050-3135-4670-bb41-0b7cf15918e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889542 4953 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsbtm\" (UniqueName: \"kubernetes.io/projected/a7b5a1d1-788d-448e-b859-c29daecb9a9b-kube-api-access-lsbtm\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889566 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85c2f4f5-9757-4a9b-947e-27caba1bcf40-cert\") pod \"ingress-canary-65zv8\" (UID: \"85c2f4f5-9757-4a9b-947e-27caba1bcf40\") " pod="openshift-ingress-canary/ingress-canary-65zv8" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889608 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djdff\" (UniqueName: \"kubernetes.io/projected/9d5a8173-8a2a-42c1-9935-2433336c3be7-kube-api-access-djdff\") pod \"multus-admission-controller-857f4d67dd-qmmnp\" (UID: \"9d5a8173-8a2a-42c1-9935-2433336c3be7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qmmnp" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889675 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1a4e773-6467-424c-935e-40ef82e5fa99-registry-certificates\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889703 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/24a3a305-afdf-4c02-b335-b8c173651e93-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vrm5k\" (UID: \"24a3a305-afdf-4c02-b335-b8c173651e93\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889727 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3374cd21-f51d-4fd3-afe6-8fd43d81622a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-krg44\" (UID: \"3374cd21-f51d-4fd3-afe6-8fd43d81622a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889749 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-socket-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.889770 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c7d0de5-9432-4fd6-b44d-6529c186be7e-trusted-ca\") pod \"ingress-operator-5b745b69d9-bc5n5\" (UID: \"7c7d0de5-9432-4fd6-b44d-6529c186be7e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.890264 4953 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1a4e773-6467-424c-935e-40ef82e5fa99-ca-trust-extracted\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.892379 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e4692382-7018-4f9f-b54e-4bcb83044387-profile-collector-cert\") pod \"catalog-operator-68c6474976-7b4cr\" (UID: \"e4692382-7018-4f9f-b54e-4bcb83044387\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.892452 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1904c62-b304-45f8-a72b-e89e77597ec1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-69q67\" (UID: \"b1904c62-b304-45f8-a72b-e89e77597ec1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.892627 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxtfl\" (UniqueName: \"kubernetes.io/projected/5c6ad29d-983a-4388-a962-3ee6af6f042f-kube-api-access-mxtfl\") pod \"package-server-manager-789f6589d5-b82cx\" (UID: \"5c6ad29d-983a-4388-a962-3ee6af6f042f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.892671 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5qjf\" (UniqueName: \"kubernetes.io/projected/3b330811-c6d6-4052-a061-d3c7781d619e-kube-api-access-q5qjf\") pod \"migrator-59844c95c7-svgfk\" (UID: \"3b330811-c6d6-4052-a061-d3c7781d619e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-svgfk" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.892706 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-mountpoint-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.892786 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51a8b0f7-87b1-48af-961b-5802873c6f76-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zlmqt\" (UID: \"51a8b0f7-87b1-48af-961b-5802873c6f76\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.892820 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f8fc0aff-efbc-4801-ad90-7b849c816858-node-bootstrap-token\") pod \"machine-config-server-47rbm\" (UID: \"f8fc0aff-efbc-4801-ad90-7b849c816858\") " pod="openshift-machine-config-operator/machine-config-server-47rbm" Dec 11 10:13:45 crc kubenswrapper[4953]: E1211 10:13:45.892890 4953 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:46.39283976 +0000 UTC m=+144.416698833 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.892974 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s77n\" (UniqueName: \"kubernetes.io/projected/e4692382-7018-4f9f-b54e-4bcb83044387-kube-api-access-8s77n\") pod \"catalog-operator-68c6474976-7b4cr\" (UID: \"e4692382-7018-4f9f-b54e-4bcb83044387\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.893074 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e4692382-7018-4f9f-b54e-4bcb83044387-srv-cert\") pod \"catalog-operator-68c6474976-7b4cr\" (UID: \"e4692382-7018-4f9f-b54e-4bcb83044387\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.893183 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/332434db-75e4-4fce-8973-aff84310d0f5-webhook-cert\") pod \"packageserver-d55dfcdfc-6jmjq\" (UID: \"332434db-75e4-4fce-8973-aff84310d0f5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.893240 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88498e28-0a15-43a5-b157-5a3baccfaaaf-secret-volume\") pod \"collect-profiles-29424120-hdqwl\" (UID: \"88498e28-0a15-43a5-b157-5a3baccfaaaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.893402 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d5a8173-8a2a-42c1-9935-2433336c3be7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qmmnp\" (UID: \"9d5a8173-8a2a-42c1-9935-2433336c3be7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qmmnp" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.893485 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/332434db-75e4-4fce-8973-aff84310d0f5-tmpfs\") pod \"packageserver-d55dfcdfc-6jmjq\" (UID: \"332434db-75e4-4fce-8973-aff84310d0f5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.893595 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c310bdcf-5786-4970-a81d-651417521b3c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b55xt\" (UID: \"c310bdcf-5786-4970-a81d-651417521b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.893660 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmtln\" (UniqueName: \"kubernetes.io/projected/06554344-a634-4dec-aaf7-e3d9919d9e80-kube-api-access-fmtln\") pod \"marketplace-operator-79b997595-xmb4p\" (UID: \"06554344-a634-4dec-aaf7-e3d9919d9e80\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.893697 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qpmf\" (UniqueName: \"kubernetes.io/projected/3cee2756-70ae-44b9-b52a-43cf1bc552e0-kube-api-access-6qpmf\") pod \"olm-operator-6b444d44fb-cl6x8\" (UID: \"3cee2756-70ae-44b9-b52a-43cf1bc552e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.893751 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmg4p\" (UniqueName: \"kubernetes.io/projected/7c7d0de5-9432-4fd6-b44d-6529c186be7e-kube-api-access-dmg4p\") pod \"ingress-operator-5b745b69d9-bc5n5\" (UID: \"7c7d0de5-9432-4fd6-b44d-6529c186be7e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.893808 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whqzj\" (UniqueName: \"kubernetes.io/projected/090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3-kube-api-access-whqzj\") pod \"dns-default-m69bw\" (UID: \"090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3\") " pod="openshift-dns/dns-default-m69bw" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894476 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3374cd21-f51d-4fd3-afe6-8fd43d81622a-images\") pod \"machine-config-operator-74547568cd-krg44\" (UID: \"3374cd21-f51d-4fd3-afe6-8fd43d81622a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894525 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f8fc0aff-efbc-4801-ad90-7b849c816858-certs\") pod \"machine-config-server-47rbm\" (UID: \"f8fc0aff-efbc-4801-ad90-7b849c816858\") " pod="openshift-machine-config-operator/machine-config-server-47rbm" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894549 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f829d83c-e4f2-4e16-b02e-57b445b6fa41-config\") pod \"service-ca-operator-777779d784-ml8wp\" (UID: \"f829d83c-e4f2-4e16-b02e-57b445b6fa41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894608 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/7c7d0de5-9432-4fd6-b44d-6529c186be7e-metrics-tls\") pod \"ingress-operator-5b745b69d9-bc5n5\" (UID: \"7c7d0de5-9432-4fd6-b44d-6529c186be7e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894635 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-plugins-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894652 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwrt4\" (UniqueName: \"kubernetes.io/projected/51a8b0f7-87b1-48af-961b-5802873c6f76-kube-api-access-jwrt4\") pod \"machine-config-controller-84d6567774-zlmqt\" (UID: \"51a8b0f7-87b1-48af-961b-5802873c6f76\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894677 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8rwl\" (UniqueName: \"kubernetes.io/projected/85c2f4f5-9757-4a9b-947e-27caba1bcf40-kube-api-access-f8rwl\") pod \"ingress-canary-65zv8\" (UID: \"85c2f4f5-9757-4a9b-947e-27caba1bcf40\") " pod="openshift-ingress-canary/ingress-canary-65zv8" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894780 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-bound-sa-token\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894800 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51a8b0f7-87b1-48af-961b-5802873c6f76-proxy-tls\") pod \"machine-config-controller-84d6567774-zlmqt\" (UID: \"51a8b0f7-87b1-48af-961b-5802873c6f76\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894827 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1a4e773-6467-424c-935e-40ef82e5fa99-installation-pull-secrets\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894872 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpksk\" (UniqueName: \"kubernetes.io/projected/88498e28-0a15-43a5-b157-5a3baccfaaaf-kube-api-access-jpksk\") pod \"collect-profiles-29424120-hdqwl\" (UID: \"88498e28-0a15-43a5-b157-5a3baccfaaaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894959 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-csi-data-dir\") pod 
\"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894988 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3cee2756-70ae-44b9-b52a-43cf1bc552e0-srv-cert\") pod \"olm-operator-6b444d44fb-cl6x8\" (UID: \"3cee2756-70ae-44b9-b52a-43cf1bc552e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.894328 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1a4e773-6467-424c-935e-40ef82e5fa99-trusted-ca\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.910558 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1a4e773-6467-424c-935e-40ef82e5fa99-registry-certificates\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.921767 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drxnt\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-kube-api-access-drxnt\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.922137 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-bound-sa-token\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.935384 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1a4e773-6467-424c-935e-40ef82e5fa99-installation-pull-secrets\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.995834 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996012 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e4692382-7018-4f9f-b54e-4bcb83044387-srv-cert\") pod \"catalog-operator-68c6474976-7b4cr\" (UID: \"e4692382-7018-4f9f-b54e-4bcb83044387\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996057 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/88498e28-0a15-43a5-b157-5a3baccfaaaf-secret-volume\") pod \"collect-profiles-29424120-hdqwl\" (UID: \"88498e28-0a15-43a5-b157-5a3baccfaaaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996079 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/332434db-75e4-4fce-8973-aff84310d0f5-webhook-cert\") pod \"packageserver-d55dfcdfc-6jmjq\" (UID: \"332434db-75e4-4fce-8973-aff84310d0f5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996099 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d5a8173-8a2a-42c1-9935-2433336c3be7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qmmnp\" (UID: \"9d5a8173-8a2a-42c1-9935-2433336c3be7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qmmnp" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996119 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c310bdcf-5786-4970-a81d-651417521b3c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b55xt\" (UID: \"c310bdcf-5786-4970-a81d-651417521b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996135 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmtln\" (UniqueName: \"kubernetes.io/projected/06554344-a634-4dec-aaf7-e3d9919d9e80-kube-api-access-fmtln\") pod \"marketplace-operator-79b997595-xmb4p\" (UID: \"06554344-a634-4dec-aaf7-e3d9919d9e80\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996156 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qpmf\" (UniqueName: \"kubernetes.io/projected/3cee2756-70ae-44b9-b52a-43cf1bc552e0-kube-api-access-6qpmf\") pod \"olm-operator-6b444d44fb-cl6x8\" (UID: \"3cee2756-70ae-44b9-b52a-43cf1bc552e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996189 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/332434db-75e4-4fce-8973-aff84310d0f5-tmpfs\") pod \"packageserver-d55dfcdfc-6jmjq\" (UID: \"332434db-75e4-4fce-8973-aff84310d0f5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996214 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmg4p\" (UniqueName: \"kubernetes.io/projected/7c7d0de5-9432-4fd6-b44d-6529c186be7e-kube-api-access-dmg4p\") pod \"ingress-operator-5b745b69d9-bc5n5\" (UID: \"7c7d0de5-9432-4fd6-b44d-6529c186be7e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996238 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whqzj\" (UniqueName: \"kubernetes.io/projected/090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3-kube-api-access-whqzj\") pod 
\"dns-default-m69bw\" (UID: \"090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3\") " pod="openshift-dns/dns-default-m69bw" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996258 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3374cd21-f51d-4fd3-afe6-8fd43d81622a-images\") pod \"machine-config-operator-74547568cd-krg44\" (UID: \"3374cd21-f51d-4fd3-afe6-8fd43d81622a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996285 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f829d83c-e4f2-4e16-b02e-57b445b6fa41-config\") pod \"service-ca-operator-777779d784-ml8wp\" (UID: \"f829d83c-e4f2-4e16-b02e-57b445b6fa41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996300 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f8fc0aff-efbc-4801-ad90-7b849c816858-certs\") pod \"machine-config-server-47rbm\" (UID: \"f8fc0aff-efbc-4801-ad90-7b849c816858\") " pod="openshift-machine-config-operator/machine-config-server-47rbm" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996316 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7c7d0de5-9432-4fd6-b44d-6529c186be7e-metrics-tls\") pod \"ingress-operator-5b745b69d9-bc5n5\" (UID: \"7c7d0de5-9432-4fd6-b44d-6529c186be7e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996334 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwrt4\" (UniqueName: \"kubernetes.io/projected/51a8b0f7-87b1-48af-961b-5802873c6f76-kube-api-access-jwrt4\") pod \"machine-config-controller-84d6567774-zlmqt\" (UID: \"51a8b0f7-87b1-48af-961b-5802873c6f76\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996352 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-plugins-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996368 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8rwl\" (UniqueName: \"kubernetes.io/projected/85c2f4f5-9757-4a9b-947e-27caba1bcf40-kube-api-access-f8rwl\") pod \"ingress-canary-65zv8\" (UID: \"85c2f4f5-9757-4a9b-947e-27caba1bcf40\") " pod="openshift-ingress-canary/ingress-canary-65zv8" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996388 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51a8b0f7-87b1-48af-961b-5802873c6f76-proxy-tls\") pod \"machine-config-controller-84d6567774-zlmqt\" (UID: \"51a8b0f7-87b1-48af-961b-5802873c6f76\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996412 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jpksk\" (UniqueName: \"kubernetes.io/projected/88498e28-0a15-43a5-b157-5a3baccfaaaf-kube-api-access-jpksk\") pod \"collect-profiles-29424120-hdqwl\" (UID: \"88498e28-0a15-43a5-b157-5a3baccfaaaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996439 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-csi-data-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996454 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3cee2756-70ae-44b9-b52a-43cf1bc552e0-srv-cert\") pod \"olm-operator-6b444d44fb-cl6x8\" (UID: \"3cee2756-70ae-44b9-b52a-43cf1bc552e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996472 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f829d83c-e4f2-4e16-b02e-57b445b6fa41-serving-cert\") pod \"service-ca-operator-777779d784-ml8wp\" (UID: \"f829d83c-e4f2-4e16-b02e-57b445b6fa41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996486 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3-metrics-tls\") pod \"dns-default-m69bw\" (UID: \"090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3\") " pod="openshift-dns/dns-default-m69bw" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996501 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/332434db-75e4-4fce-8973-aff84310d0f5-apiservice-cert\") pod \"packageserver-d55dfcdfc-6jmjq\" (UID: \"332434db-75e4-4fce-8973-aff84310d0f5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996518 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c310bdcf-5786-4970-a81d-651417521b3c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b55xt\" (UID: \"c310bdcf-5786-4970-a81d-651417521b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996534 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj9nt\" (UniqueName: \"kubernetes.io/projected/7a0b9050-3135-4670-bb41-0b7cf15918e6-kube-api-access-vj9nt\") pod \"service-ca-9c57cc56f-jb2sd\" (UID: \"7a0b9050-3135-4670-bb41-0b7cf15918e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996552 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06554344-a634-4dec-aaf7-e3d9919d9e80-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xmb4p\" (UID: \"06554344-a634-4dec-aaf7-e3d9919d9e80\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996588 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjczj\" (UniqueName: \"kubernetes.io/projected/3374cd21-f51d-4fd3-afe6-8fd43d81622a-kube-api-access-rjczj\") pod \"machine-config-operator-74547568cd-krg44\" (UID: \"3374cd21-f51d-4fd3-afe6-8fd43d81622a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996605 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3-config-volume\") pod \"dns-default-m69bw\" (UID: \"090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3\") " pod="openshift-dns/dns-default-m69bw" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996623 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1904c62-b304-45f8-a72b-e89e77597ec1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-69q67\" (UID: \"b1904c62-b304-45f8-a72b-e89e77597ec1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996639 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7slkz\" (UniqueName: \"kubernetes.io/projected/f829d83c-e4f2-4e16-b02e-57b445b6fa41-kube-api-access-7slkz\") pod \"service-ca-operator-777779d784-ml8wp\" (UID: \"f829d83c-e4f2-4e16-b02e-57b445b6fa41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996659 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c6ad29d-983a-4388-a962-3ee6af6f042f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b82cx\" (UID: \"5c6ad29d-983a-4388-a962-3ee6af6f042f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996676 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c7d0de5-9432-4fd6-b44d-6529c186be7e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bc5n5\" (UID: \"7c7d0de5-9432-4fd6-b44d-6529c186be7e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996693 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88498e28-0a15-43a5-b157-5a3baccfaaaf-config-volume\") pod \"collect-profiles-29424120-hdqwl\" (UID: \"88498e28-0a15-43a5-b157-5a3baccfaaaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996714 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5s4v\" (UniqueName: \"kubernetes.io/projected/f8fc0aff-efbc-4801-ad90-7b849c816858-kube-api-access-n5s4v\") pod \"machine-config-server-47rbm\" (UID: \"f8fc0aff-efbc-4801-ad90-7b849c816858\") " pod="openshift-machine-config-operator/machine-config-server-47rbm" Dec 11 10:13:45 crc kubenswrapper[4953]: 
I1211 10:13:45.996729 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3374cd21-f51d-4fd3-afe6-8fd43d81622a-proxy-tls\") pod \"machine-config-operator-74547568cd-krg44\" (UID: \"3374cd21-f51d-4fd3-afe6-8fd43d81622a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996767 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06554344-a634-4dec-aaf7-e3d9919d9e80-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xmb4p\" (UID: \"06554344-a634-4dec-aaf7-e3d9919d9e80\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996800 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgg9b\" (UniqueName: \"kubernetes.io/projected/24a3a305-afdf-4c02-b335-b8c173651e93-kube-api-access-jgg9b\") pod \"control-plane-machine-set-operator-78cbb6b69f-vrm5k\" (UID: \"24a3a305-afdf-4c02-b335-b8c173651e93\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996818 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-registration-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996846 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1904c62-b304-45f8-a72b-e89e77597ec1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-69q67\" (UID: \"b1904c62-b304-45f8-a72b-e89e77597ec1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996861 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3cee2756-70ae-44b9-b52a-43cf1bc552e0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cl6x8\" (UID: \"3cee2756-70ae-44b9-b52a-43cf1bc552e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996879 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7a0b9050-3135-4670-bb41-0b7cf15918e6-signing-key\") pod \"service-ca-9c57cc56f-jb2sd\" (UID: \"7a0b9050-3135-4670-bb41-0b7cf15918e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996898 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wbgz\" (UniqueName: \"kubernetes.io/projected/332434db-75e4-4fce-8973-aff84310d0f5-kube-api-access-5wbgz\") pod \"packageserver-d55dfcdfc-6jmjq\" (UID: \"332434db-75e4-4fce-8973-aff84310d0f5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996913 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrqhl\" 
(UniqueName: \"kubernetes.io/projected/c310bdcf-5786-4970-a81d-651417521b3c-kube-api-access-wrqhl\") pod \"kube-storage-version-migrator-operator-b67b599dd-b55xt\" (UID: \"c310bdcf-5786-4970-a81d-651417521b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996932 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7a0b9050-3135-4670-bb41-0b7cf15918e6-signing-cabundle\") pod \"service-ca-9c57cc56f-jb2sd\" (UID: \"7a0b9050-3135-4670-bb41-0b7cf15918e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996948 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsbtm\" (UniqueName: \"kubernetes.io/projected/a7b5a1d1-788d-448e-b859-c29daecb9a9b-kube-api-access-lsbtm\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996963 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85c2f4f5-9757-4a9b-947e-27caba1bcf40-cert\") pod \"ingress-canary-65zv8\" (UID: \"85c2f4f5-9757-4a9b-947e-27caba1bcf40\") " pod="openshift-ingress-canary/ingress-canary-65zv8" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996978 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djdff\" (UniqueName: \"kubernetes.io/projected/9d5a8173-8a2a-42c1-9935-2433336c3be7-kube-api-access-djdff\") pod \"multus-admission-controller-857f4d67dd-qmmnp\" (UID: \"9d5a8173-8a2a-42c1-9935-2433336c3be7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qmmnp" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.996995 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/24a3a305-afdf-4c02-b335-b8c173651e93-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vrm5k\" (UID: \"24a3a305-afdf-4c02-b335-b8c173651e93\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.997012 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3374cd21-f51d-4fd3-afe6-8fd43d81622a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-krg44\" (UID: \"3374cd21-f51d-4fd3-afe6-8fd43d81622a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.997029 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-socket-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.997052 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c7d0de5-9432-4fd6-b44d-6529c186be7e-trusted-ca\") pod \"ingress-operator-5b745b69d9-bc5n5\" (UID: 
\"7c7d0de5-9432-4fd6-b44d-6529c186be7e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.997071 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e4692382-7018-4f9f-b54e-4bcb83044387-profile-collector-cert\") pod \"catalog-operator-68c6474976-7b4cr\" (UID: \"e4692382-7018-4f9f-b54e-4bcb83044387\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.997089 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1904c62-b304-45f8-a72b-e89e77597ec1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-69q67\" (UID: \"b1904c62-b304-45f8-a72b-e89e77597ec1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.997105 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxtfl\" (UniqueName: \"kubernetes.io/projected/5c6ad29d-983a-4388-a962-3ee6af6f042f-kube-api-access-mxtfl\") pod \"package-server-manager-789f6589d5-b82cx\" (UID: \"5c6ad29d-983a-4388-a962-3ee6af6f042f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.997121 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5qjf\" (UniqueName: \"kubernetes.io/projected/3b330811-c6d6-4052-a061-d3c7781d619e-kube-api-access-q5qjf\") pod \"migrator-59844c95c7-svgfk\" (UID: \"3b330811-c6d6-4052-a061-d3c7781d619e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-svgfk" Dec 11 10:13:45 crc kubenswrapper[4953]: I1211 10:13:45.997150 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-mountpoint-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:45.997187 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s77n\" (UniqueName: \"kubernetes.io/projected/e4692382-7018-4f9f-b54e-4bcb83044387-kube-api-access-8s77n\") pod \"catalog-operator-68c6474976-7b4cr\" (UID: \"e4692382-7018-4f9f-b54e-4bcb83044387\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:45.997212 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51a8b0f7-87b1-48af-961b-5802873c6f76-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zlmqt\" (UID: \"51a8b0f7-87b1-48af-961b-5802873c6f76\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:45.997227 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f8fc0aff-efbc-4801-ad90-7b849c816858-node-bootstrap-token\") pod \"machine-config-server-47rbm\" (UID: \"f8fc0aff-efbc-4801-ad90-7b849c816858\") " 
pod="openshift-machine-config-operator/machine-config-server-47rbm" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.001119 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/332434db-75e4-4fce-8973-aff84310d0f5-tmpfs\") pod \"packageserver-d55dfcdfc-6jmjq\" (UID: \"332434db-75e4-4fce-8973-aff84310d0f5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:46 crc kubenswrapper[4953]: E1211 10:13:46.001310 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:46.50128386 +0000 UTC m=+144.525142913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.011376 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88498e28-0a15-43a5-b157-5a3baccfaaaf-config-volume\") pod \"collect-profiles-29424120-hdqwl\" (UID: \"88498e28-0a15-43a5-b157-5a3baccfaaaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.011591 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88498e28-0a15-43a5-b157-5a3baccfaaaf-secret-volume\") pod \"collect-profiles-29424120-hdqwl\" (UID: \"88498e28-0a15-43a5-b157-5a3baccfaaaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.013363 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d5a8173-8a2a-42c1-9935-2433336c3be7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qmmnp\" (UID: \"9d5a8173-8a2a-42c1-9935-2433336c3be7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qmmnp" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.014129 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3374cd21-f51d-4fd3-afe6-8fd43d81622a-images\") pod \"machine-config-operator-74547568cd-krg44\" (UID: \"3374cd21-f51d-4fd3-afe6-8fd43d81622a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.015852 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c310bdcf-5786-4970-a81d-651417521b3c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b55xt\" (UID: \"c310bdcf-5786-4970-a81d-651417521b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.016340 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f829d83c-e4f2-4e16-b02e-57b445b6fa41-config\") pod \"service-ca-operator-777779d784-ml8wp\" (UID: \"f829d83c-e4f2-4e16-b02e-57b445b6fa41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.021317 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3374cd21-f51d-4fd3-afe6-8fd43d81622a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-krg44\" (UID: \"3374cd21-f51d-4fd3-afe6-8fd43d81622a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.025060 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-plugins-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.025779 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f8fc0aff-efbc-4801-ad90-7b849c816858-certs\") pod \"machine-config-server-47rbm\" (UID: \"f8fc0aff-efbc-4801-ad90-7b849c816858\") " pod="openshift-machine-config-operator/machine-config-server-47rbm" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.034603 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7a0b9050-3135-4670-bb41-0b7cf15918e6-signing-key\") pod \"service-ca-9c57cc56f-jb2sd\" (UID: \"7a0b9050-3135-4670-bb41-0b7cf15918e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.035550 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7a0b9050-3135-4670-bb41-0b7cf15918e6-signing-cabundle\") pod \"service-ca-9c57cc56f-jb2sd\" (UID: \"7a0b9050-3135-4670-bb41-0b7cf15918e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.035913 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3-config-volume\") pod \"dns-default-m69bw\" (UID: \"090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3\") " pod="openshift-dns/dns-default-m69bw" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.038800 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whqzj\" (UniqueName: \"kubernetes.io/projected/090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3-kube-api-access-whqzj\") pod \"dns-default-m69bw\" (UID: \"090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3\") " pod="openshift-dns/dns-default-m69bw" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.038878 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-socket-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.038959 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-csi-data-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.103093 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e4692382-7018-4f9f-b54e-4bcb83044387-profile-collector-cert\") pod \"catalog-operator-68c6474976-7b4cr\" (UID: \"e4692382-7018-4f9f-b54e-4bcb83044387\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.107517 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmg4p\" (UniqueName: \"kubernetes.io/projected/7c7d0de5-9432-4fd6-b44d-6529c186be7e-kube-api-access-dmg4p\") pod \"ingress-operator-5b745b69d9-bc5n5\" (UID: \"7c7d0de5-9432-4fd6-b44d-6529c186be7e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.108349 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1904c62-b304-45f8-a72b-e89e77597ec1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-69q67\" (UID: \"b1904c62-b304-45f8-a72b-e89e77597ec1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.110503 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51a8b0f7-87b1-48af-961b-5802873c6f76-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zlmqt\" (UID: \"51a8b0f7-87b1-48af-961b-5802873c6f76\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.111243 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f8fc0aff-efbc-4801-ad90-7b849c816858-node-bootstrap-token\") pod \"machine-config-server-47rbm\" (UID: \"f8fc0aff-efbc-4801-ad90-7b849c816858\") " pod="openshift-machine-config-operator/machine-config-server-47rbm" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.115639 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-v8699" event={"ID":"d16293c2-d5aa-41fe-859c-0cc5201b6f0b","Type":"ContainerStarted","Data":"fa9ce134663f16a00fc80eee0f8249c1e82b949467eb3d7f8ba05b26ed593eb0"} Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.121561 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-registration-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.121762 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a7b5a1d1-788d-448e-b859-c29daecb9a9b-mountpoint-dir\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.179525 4953 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85c2f4f5-9757-4a9b-947e-27caba1bcf40-cert\") pod \"ingress-canary-65zv8\" (UID: \"85c2f4f5-9757-4a9b-947e-27caba1bcf40\") " pod="openshift-ingress-canary/ingress-canary-65zv8" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.204277 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06554344-a634-4dec-aaf7-e3d9919d9e80-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xmb4p\" (UID: \"06554344-a634-4dec-aaf7-e3d9919d9e80\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.204799 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e4692382-7018-4f9f-b54e-4bcb83044387-srv-cert\") pod \"catalog-operator-68c6474976-7b4cr\" (UID: \"e4692382-7018-4f9f-b54e-4bcb83044387\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.205732 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5qjf\" (UniqueName: \"kubernetes.io/projected/3b330811-c6d6-4052-a061-d3c7781d619e-kube-api-access-q5qjf\") pod \"migrator-59844c95c7-svgfk\" (UID: \"3b330811-c6d6-4052-a061-d3c7781d619e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-svgfk" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.205915 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1904c62-b304-45f8-a72b-e89e77597ec1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-69q67\" (UID: \"b1904c62-b304-45f8-a72b-e89e77597ec1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.207611 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:46 crc kubenswrapper[4953]: E1211 10:13:46.208162 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:46.708144461 +0000 UTC m=+144.732003494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.209869 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7c7d0de5-9432-4fd6-b44d-6529c186be7e-metrics-tls\") pod \"ingress-operator-5b745b69d9-bc5n5\" (UID: \"7c7d0de5-9432-4fd6-b44d-6529c186be7e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.210408 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8rwl\" (UniqueName: \"kubernetes.io/projected/85c2f4f5-9757-4a9b-947e-27caba1bcf40-kube-api-access-f8rwl\") pod \"ingress-canary-65zv8\" (UID: \"85c2f4f5-9757-4a9b-947e-27caba1bcf40\") " pod="openshift-ingress-canary/ingress-canary-65zv8" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.271692 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c7d0de5-9432-4fd6-b44d-6529c186be7e-trusted-ca\") pod \"ingress-operator-5b745b69d9-bc5n5\" (UID: \"7c7d0de5-9432-4fd6-b44d-6529c186be7e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.272939 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-svgfk" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.315117 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51a8b0f7-87b1-48af-961b-5802873c6f76-proxy-tls\") pod \"machine-config-controller-84d6567774-zlmqt\" (UID: \"51a8b0f7-87b1-48af-961b-5802873c6f76\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.315368 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3cee2756-70ae-44b9-b52a-43cf1bc552e0-srv-cert\") pod \"olm-operator-6b444d44fb-cl6x8\" (UID: \"3cee2756-70ae-44b9-b52a-43cf1bc552e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.320425 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:46 crc kubenswrapper[4953]: E1211 10:13:46.321445 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:46.821424301 +0000 UTC m=+144.845283334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.323449 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxtfl\" (UniqueName: \"kubernetes.io/projected/5c6ad29d-983a-4388-a962-3ee6af6f042f-kube-api-access-mxtfl\") pod \"package-server-manager-789f6589d5-b82cx\" (UID: \"5c6ad29d-983a-4388-a962-3ee6af6f042f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.451817 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3374cd21-f51d-4fd3-afe6-8fd43d81622a-proxy-tls\") pod \"machine-config-operator-74547568cd-krg44\" (UID: \"3374cd21-f51d-4fd3-afe6-8fd43d81622a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.452611 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5s4v\" (UniqueName: \"kubernetes.io/projected/f8fc0aff-efbc-4801-ad90-7b849c816858-kube-api-access-n5s4v\") pod \"machine-config-server-47rbm\" (UID: \"f8fc0aff-efbc-4801-ad90-7b849c816858\") " pod="openshift-machine-config-operator/machine-config-server-47rbm" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.453008 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.453161 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/24a3a305-afdf-4c02-b335-b8c173651e93-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vrm5k\" (UID: \"24a3a305-afdf-4c02-b335-b8c173651e93\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.453594 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06554344-a634-4dec-aaf7-e3d9919d9e80-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xmb4p\" (UID: \"06554344-a634-4dec-aaf7-e3d9919d9e80\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.454113 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/332434db-75e4-4fce-8973-aff84310d0f5-apiservice-cert\") pod \"packageserver-d55dfcdfc-6jmjq\" (UID: \"332434db-75e4-4fce-8973-aff84310d0f5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 
10:13:46.454148 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwrt4\" (UniqueName: \"kubernetes.io/projected/51a8b0f7-87b1-48af-961b-5802873c6f76-kube-api-access-jwrt4\") pod \"machine-config-controller-84d6567774-zlmqt\" (UID: \"51a8b0f7-87b1-48af-961b-5802873c6f76\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.454517 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c7d0de5-9432-4fd6-b44d-6529c186be7e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bc5n5\" (UID: \"7c7d0de5-9432-4fd6-b44d-6529c186be7e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.455314 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djdff\" (UniqueName: \"kubernetes.io/projected/9d5a8173-8a2a-42c1-9935-2433336c3be7-kube-api-access-djdff\") pod \"multus-admission-controller-857f4d67dd-qmmnp\" (UID: \"9d5a8173-8a2a-42c1-9935-2433336c3be7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qmmnp" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.457306 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1904c62-b304-45f8-a72b-e89e77597ec1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-69q67\" (UID: \"b1904c62-b304-45f8-a72b-e89e77597ec1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.614027 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgg9b\" (UniqueName: \"kubernetes.io/projected/24a3a305-afdf-4c02-b335-b8c173651e93-kube-api-access-jgg9b\") pod \"control-plane-machine-set-operator-78cbb6b69f-vrm5k\" (UID: \"24a3a305-afdf-4c02-b335-b8c173651e93\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.614250 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3-metrics-tls\") pod \"dns-default-m69bw\" (UID: \"090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3\") " pod="openshift-dns/dns-default-m69bw" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.614810 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c6ad29d-983a-4388-a962-3ee6af6f042f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b82cx\" (UID: \"5c6ad29d-983a-4388-a962-3ee6af6f042f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.615419 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f829d83c-e4f2-4e16-b02e-57b445b6fa41-serving-cert\") pod \"service-ca-operator-777779d784-ml8wp\" (UID: \"f829d83c-e4f2-4e16-b02e-57b445b6fa41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.616329 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpksk\" (UniqueName: 
\"kubernetes.io/projected/88498e28-0a15-43a5-b157-5a3baccfaaaf-kube-api-access-jpksk\") pod \"collect-profiles-29424120-hdqwl\" (UID: \"88498e28-0a15-43a5-b157-5a3baccfaaaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.617063 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qpmf\" (UniqueName: \"kubernetes.io/projected/3cee2756-70ae-44b9-b52a-43cf1bc552e0-kube-api-access-6qpmf\") pod \"olm-operator-6b444d44fb-cl6x8\" (UID: \"3cee2756-70ae-44b9-b52a-43cf1bc552e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.629187 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qmmnp" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.629874 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wbgz\" (UniqueName: \"kubernetes.io/projected/332434db-75e4-4fce-8973-aff84310d0f5-kube-api-access-5wbgz\") pod \"packageserver-d55dfcdfc-6jmjq\" (UID: \"332434db-75e4-4fce-8973-aff84310d0f5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.629994 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.630752 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.631662 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-47rbm" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.632926 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/332434db-75e4-4fce-8973-aff84310d0f5-webhook-cert\") pod \"packageserver-d55dfcdfc-6jmjq\" (UID: \"332434db-75e4-4fce-8973-aff84310d0f5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.645199 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmtln\" (UniqueName: \"kubernetes.io/projected/06554344-a634-4dec-aaf7-e3d9919d9e80-kube-api-access-fmtln\") pod \"marketplace-operator-79b997595-xmb4p\" (UID: \"06554344-a634-4dec-aaf7-e3d9919d9e80\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.645916 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c310bdcf-5786-4970-a81d-651417521b3c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b55xt\" (UID: \"c310bdcf-5786-4970-a81d-651417521b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.646821 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s77n\" (UniqueName: \"kubernetes.io/projected/e4692382-7018-4f9f-b54e-4bcb83044387-kube-api-access-8s77n\") pod \"catalog-operator-68c6474976-7b4cr\" (UID: \"e4692382-7018-4f9f-b54e-4bcb83044387\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.649154 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsbtm\" (UniqueName: \"kubernetes.io/projected/a7b5a1d1-788d-448e-b859-c29daecb9a9b-kube-api-access-lsbtm\") pod \"csi-hostpathplugin-9sdjn\" (UID: \"a7b5a1d1-788d-448e-b859-c29daecb9a9b\") " pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.649829 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.650459 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.652785 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3cee2756-70ae-44b9-b52a-43cf1bc552e0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cl6x8\" (UID: \"3cee2756-70ae-44b9-b52a-43cf1bc552e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:13:46 crc kubenswrapper[4953]: E1211 10:13:46.655532 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:47.155514801 +0000 UTC m=+145.179373834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.657154 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-65zv8" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.658157 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.658723 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.658934 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m69bw" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.665809 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj9nt\" (UniqueName: \"kubernetes.io/projected/7a0b9050-3135-4670-bb41-0b7cf15918e6-kube-api-access-vj9nt\") pod \"service-ca-9c57cc56f-jb2sd\" (UID: \"7a0b9050-3135-4670-bb41-0b7cf15918e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.667418 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7slkz\" (UniqueName: \"kubernetes.io/projected/f829d83c-e4f2-4e16-b02e-57b445b6fa41-kube-api-access-7slkz\") pod \"service-ca-operator-777779d784-ml8wp\" (UID: \"f829d83c-e4f2-4e16-b02e-57b445b6fa41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.667709 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.675527 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.686665 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.690044 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrqhl\" (UniqueName: \"kubernetes.io/projected/c310bdcf-5786-4970-a81d-651417521b3c-kube-api-access-wrqhl\") pod \"kube-storage-version-migrator-operator-b67b599dd-b55xt\" (UID: \"c310bdcf-5786-4970-a81d-651417521b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.691289 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.736047 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.804630 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjczj\" (UniqueName: \"kubernetes.io/projected/3374cd21-f51d-4fd3-afe6-8fd43d81622a-kube-api-access-rjczj\") pod \"machine-config-operator-74547568cd-krg44\" (UID: \"3374cd21-f51d-4fd3-afe6-8fd43d81622a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.807078 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:46 crc kubenswrapper[4953]: E1211 10:13:46.807493 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:47.307465265 +0000 UTC m=+145.331324298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.807640 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:46 crc kubenswrapper[4953]: E1211 10:13:46.808010 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:47.307998012 +0000 UTC m=+145.331857045 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.961630 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.961948 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.962379 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.962437 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.962599 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" Dec 11 10:13:46 crc kubenswrapper[4953]: I1211 10:13:46.962657 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" Dec 11 10:13:46 crc kubenswrapper[4953]: E1211 10:13:46.962701 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:47.462680295 +0000 UTC m=+145.486539328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.063270 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:47 crc kubenswrapper[4953]: E1211 10:13:47.063760 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:47.563741922 +0000 UTC m=+145.587600955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.163912 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:47 crc kubenswrapper[4953]: E1211 10:13:47.164524 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:47.66449808 +0000 UTC m=+145.688357113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.178700 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-47rbm" event={"ID":"f8fc0aff-efbc-4801-ad90-7b849c816858","Type":"ContainerStarted","Data":"c4aee0e1a76abc8b57396816e59756c8aa78c2a326088d4bc1ea30039e173a97"} Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.265827 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:47 crc kubenswrapper[4953]: E1211 10:13:47.266378 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:47.766346754 +0000 UTC m=+145.790205787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.374373 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:13:47 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:13:47 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:13:47 crc kubenswrapper[4953]: healthz check failed Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.374435 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.386227 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:47 crc kubenswrapper[4953]: E1211 10:13:47.386592 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:47.886542781 +0000 UTC m=+145.910401814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.386734 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:47 crc kubenswrapper[4953]: E1211 10:13:47.387222 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:47.887211883 +0000 UTC m=+145.911070916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.487609 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:47 crc kubenswrapper[4953]: E1211 10:13:47.488163 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:47.988144017 +0000 UTC m=+146.012003050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.590035 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:47 crc kubenswrapper[4953]: E1211 10:13:47.590553 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:48.090534208 +0000 UTC m=+146.114393241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.691410 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:47 crc kubenswrapper[4953]: E1211 10:13:47.691714 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:48.191699339 +0000 UTC m=+146.215558372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.794256 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:47 crc kubenswrapper[4953]: E1211 10:13:47.794720 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:48.29470286 +0000 UTC m=+146.318561893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.895438 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:47 crc kubenswrapper[4953]: E1211 10:13:47.896943 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:48.396915796 +0000 UTC m=+146.420774839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:47 crc kubenswrapper[4953]: I1211 10:13:47.918277 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-v8699" podStartSLOduration=122.918257098 podStartE2EDuration="2m2.918257098s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:47.914714841 +0000 UTC m=+145.938573884" watchObservedRunningTime="2025-12-11 10:13:47.918257098 +0000 UTC m=+145.942116131" Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:47.993861 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:13:48 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:13:48 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:13:48 crc kubenswrapper[4953]: healthz check failed Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:47.993927 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.002600 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:48 crc kubenswrapper[4953]: E1211 10:13:48.003167 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:48.503126113 +0000 UTC m=+146.526985166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.103584 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:48 crc kubenswrapper[4953]: E1211 10:13:48.104016 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:48.603996104 +0000 UTC m=+146.627855137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.192152 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-47rbm" event={"ID":"f8fc0aff-efbc-4801-ad90-7b849c816858","Type":"ContainerStarted","Data":"7b15d5289a4a9b6db5fd7c7066d3bb96ebd3fe5cd036e1c26bc09b87f01158fb"} Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.194315 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.194403 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.205801 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: 
\"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:48 crc kubenswrapper[4953]: E1211 10:13:48.207124 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:48.707098189 +0000 UTC m=+146.730957232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.311976 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:48 crc kubenswrapper[4953]: E1211 10:13:48.313282 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:48.813263566 +0000 UTC m=+146.837122589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.358440 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-47rbm" podStartSLOduration=6.358415051 podStartE2EDuration="6.358415051s" podCreationTimestamp="2025-12-11 10:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:48.355816226 +0000 UTC m=+146.379675249" watchObservedRunningTime="2025-12-11 10:13:48.358415051 +0000 UTC m=+146.382274084" Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.399718 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" event={"ID":"7c9ad729-d2d8-41a2-aac4-7c4909f0df98","Type":"ContainerStarted","Data":"910588cb112b91bfced0fb2a3e59583961df5c107509c8c0668c2deb9028f534"} Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.437511 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" 
Dec 11 10:13:48 crc kubenswrapper[4953]: E1211 10:13:48.437984 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:48.937968369 +0000 UTC m=+146.961827402 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.538338 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:48 crc kubenswrapper[4953]: E1211 10:13:48.539007 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:49.038987256 +0000 UTC m=+147.062846289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.665967 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:48 crc kubenswrapper[4953]: E1211 10:13:48.667128 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:49.167104365 +0000 UTC m=+147.190963398 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.781669 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:48 crc kubenswrapper[4953]: E1211 10:13:48.781864 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:49.281836612 +0000 UTC m=+147.305695645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.782258 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:48 crc kubenswrapper[4953]: E1211 10:13:48.782667 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:49.282659029 +0000 UTC m=+147.306518062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.927828 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:13:48 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:13:48 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:13:48 crc kubenswrapper[4953]: healthz check failed Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.927892 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:13:48 crc kubenswrapper[4953]: I1211 10:13:48.929121 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:48 crc kubenswrapper[4953]: E1211 10:13:48.929486 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:49.429465483 +0000 UTC m=+147.453324516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.034733 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:49 crc kubenswrapper[4953]: E1211 10:13:49.035710 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:49.53568848 +0000 UTC m=+147.559547513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.135792 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:49 crc kubenswrapper[4953]: E1211 10:13:49.135967 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:49.635932751 +0000 UTC m=+147.659791784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.136247 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:49 crc kubenswrapper[4953]: E1211 10:13:49.136549 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:49.636542061 +0000 UTC m=+147.660401084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.240683 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:49 crc kubenswrapper[4953]: E1211 10:13:49.240843 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:49.740815874 +0000 UTC m=+147.764674907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.241015 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:49 crc kubenswrapper[4953]: E1211 10:13:49.241454 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:49.741430664 +0000 UTC m=+147.765289687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.378182 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:49 crc kubenswrapper[4953]: E1211 10:13:49.378618 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:49.87859364 +0000 UTC m=+147.902452673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.405706 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" event={"ID":"7c9ad729-d2d8-41a2-aac4-7c4909f0df98","Type":"ContainerStarted","Data":"4ce2009ace1b66eeafa2ffad97805a371e8f60c9fe393adeb2e052ea9dbd82af"} Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.503242 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:49 crc kubenswrapper[4953]: E1211 10:13:49.625812 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:50.12578595 +0000 UTC m=+148.149644983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.626084 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:49 crc kubenswrapper[4953]: E1211 10:13:49.626461 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:50.126452432 +0000 UTC m=+148.150311465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.657646 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jnmj9" podStartSLOduration=125.657628598 podStartE2EDuration="2m5.657628598s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:49.656610115 +0000 UTC m=+147.680469168" watchObservedRunningTime="2025-12-11 10:13:49.657628598 +0000 UTC m=+147.681487641" Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.691545 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:13:49 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:13:49 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:13:49 crc kubenswrapper[4953]: healthz check failed Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.691625 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.867218 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:49 crc kubenswrapper[4953]: E1211 10:13:49.868654 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:50.368642096 +0000 UTC m=+148.392501129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.930277 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-crtp9"] Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.934980 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vjv7f"] Dec 11 10:13:49 crc kubenswrapper[4953]: I1211 10:13:49.950633 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jnqj6"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:49.963587 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wfrqd"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:49.968627 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:50 crc kubenswrapper[4953]: E1211 10:13:49.969154 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:50.469128875 +0000 UTC m=+148.492987908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.445943 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nzrxl"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.461488 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9jt44"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.471948 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.472006 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rxj74"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.517380 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.517442 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.538738 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:50 crc kubenswrapper[4953]: E1211 10:13:50.539274 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.539242516 +0000 UTC m=+149.563101559 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.539767 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.539838 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.717863 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:50 crc kubenswrapper[4953]: E1211 10:13:50.718824 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.218810028 +0000 UTC m=+149.242669061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.720032 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8s4mq"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.720083 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-svgfk"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.728686 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" event={"ID":"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9","Type":"ContainerStarted","Data":"c63201e6d82f92e3e0a8e0149becaa113bf2c9127b55b04f257914644c43ad4a"} Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.752560 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:13:50 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:13:50 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:13:50 crc kubenswrapper[4953]: healthz check failed Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.752669 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" 
podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.764156 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x8dvj"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.794944 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wfrqd" event={"ID":"6a593442-828c-4cff-b9b9-4efa41ef6f44","Type":"ContainerStarted","Data":"048fef2fc96c4f67d3f0e81fe8e18db495515590a26df2c37c93829468a58750"} Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.798593 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.806146 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h"] Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.809010 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" event={"ID":"49bbe965-c5d1-4c35-a42b-3b8e7a264de7","Type":"ContainerStarted","Data":"2145d53d9384bfa2716e2ab7f0f06e4e4f003deda87d07d0062e9d317d5aae61"} Dec 11 10:13:50 crc kubenswrapper[4953]: I1211 10:13:50.810179 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:50.828669 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j88r5"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:50.830635 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:51 crc kubenswrapper[4953]: E1211 10:13:50.831169 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.331136507 +0000 UTC m=+149.354995540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.050758 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:51 crc kubenswrapper[4953]: E1211 10:13:51.051284 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.551267874 +0000 UTC m=+149.575126907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.051281 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.051337 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pzsms"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.053725 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmb4p"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.056565 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rxj74" event={"ID":"74a6bf4e-fce1-4865-a637-13252c668255","Type":"ContainerStarted","Data":"4c4de75b0a68e25b642273f4825e83746a2d5c1c284ef72cd4125f4219c50b17"} Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.059030 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.063780 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" event={"ID":"cc3eba09-e19d-4f1e-abbf-01d6f9463022","Type":"ContainerStarted","Data":"46a10dec563b737a3e7b4736d61cd8a3791d8ab190c6f4d49fa08c84c74200cb"} Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.066996 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.069460 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.074743 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" event={"ID":"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9","Type":"ContainerStarted","Data":"0af57bb95800a5e9f3dfc197d71e2ce34a6a055a9eb20e81fd9952a3868a8d5c"} Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.080462 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" event={"ID":"c573c49d-036d-4d92-a63d-4f830df8a262","Type":"ContainerStarted","Data":"739562b1b7634d8ca36107b17d5d5c8ef33213f31c2b4192cef07a307af02aa2"} Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.081443 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" event={"ID":"6940241d-144c-44c2-bc2b-6b27c9ed106d","Type":"ContainerStarted","Data":"007fd24c0e7bc1481a0e7ddaa8594483730788476e4d42e0ce55548aa83fc9ab"} Dec 11 10:13:51 crc kubenswrapper[4953]: W1211 10:13:51.139903 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06554344_a634_4dec_aaf7_e3d9919d9e80.slice/crio-366a26b1116e09d72e9985dd5d5cf3c2279f6e338d085e54287029b14246fc31 WatchSource:0}: Error finding container 366a26b1116e09d72e9985dd5d5cf3c2279f6e338d085e54287029b14246fc31: Status 404 returned error can't find the container with id 366a26b1116e09d72e9985dd5d5cf3c2279f6e338d085e54287029b14246fc31 Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.141480 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9jt44" event={"ID":"63ca4931-8019-4e0d-ab43-ae5bd50b8d91","Type":"ContainerStarted","Data":"d3ff1c8e6b8f7630e0834b4b53c19923edc6f04c8c5cdfa03dc70d3fb5a65db5"} Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.151438 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:51 crc kubenswrapper[4953]: E1211 10:13:51.151923 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.651906078 +0000 UTC m=+149.675765101 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.161025 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.175264 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.189437 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.190627 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qmmnp"] Dec 11 10:13:51 crc kubenswrapper[4953]: W1211 10:13:51.241135 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24a3a305_afdf_4c02_b335_b8c173651e93.slice/crio-19fd1d7e3c6f7ad76e8943ee5f8127e03a655086723f63bfc5e43bf2aa5ca81f WatchSource:0}: Error finding container 19fd1d7e3c6f7ad76e8943ee5f8127e03a655086723f63bfc5e43bf2aa5ca81f: Status 404 returned error can't find the container with id 19fd1d7e3c6f7ad76e8943ee5f8127e03a655086723f63bfc5e43bf2aa5ca81f Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.252625 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:51 crc kubenswrapper[4953]: E1211 10:13:51.253772 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.753752632 +0000 UTC m=+149.777611665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:51 crc kubenswrapper[4953]: W1211 10:13:51.307848 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4692382_7018_4f9f_b54e_4bcb83044387.slice/crio-1db36f9d15e2a2d98ef324d224b750c9cb89138f26191154c69d9f8532b63934 WatchSource:0}: Error finding container 1db36f9d15e2a2d98ef324d224b750c9cb89138f26191154c69d9f8532b63934: Status 404 returned error can't find the container with id 1db36f9d15e2a2d98ef324d224b750c9cb89138f26191154c69d9f8532b63934 Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.361772 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.362024 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.363360 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 10:13:51 crc kubenswrapper[4953]: E1211 10:13:51.364216 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.864191208 +0000 UTC m=+149.888050261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:51 crc kubenswrapper[4953]: W1211 10:13:51.368888 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d5a8173_8a2a_42c1_9935_2433336c3be7.slice/crio-8bada8bcae3ce49c0bc12f033be8974078026c7b565e729c9a1ffc5e10a4a5f3 WatchSource:0}: Error finding container 8bada8bcae3ce49c0bc12f033be8974078026c7b565e729c9a1ffc5e10a4a5f3: Status 404 returned error can't find the container with id 8bada8bcae3ce49c0bc12f033be8974078026c7b565e729c9a1ffc5e10a4a5f3 Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.370674 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.409926 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9sdjn"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.411269 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jb2sd"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.436983 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.449297 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.449354 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-krg44"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.451933 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-65zv8"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.452079 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.454009 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m69bw"] Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.465114 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:51 crc kubenswrapper[4953]: E1211 10:13:51.465447 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:51.965432921 +0000 UTC m=+149.989291954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:51 crc kubenswrapper[4953]: W1211 10:13:51.498656 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88498e28_0a15_43a5_b157_5a3baccfaaaf.slice/crio-0663c1f7bcd738da2d586b7682bdf7a4dd951c70c5ea8c8362f97f69e222c90b WatchSource:0}: Error finding container 0663c1f7bcd738da2d586b7682bdf7a4dd951c70c5ea8c8362f97f69e222c90b: Status 404 returned error can't find the container with id 0663c1f7bcd738da2d586b7682bdf7a4dd951c70c5ea8c8362f97f69e222c90b Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.566456 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:51 crc kubenswrapper[4953]: E1211 10:13:51.566815 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:52.066786968 +0000 UTC m=+150.090646001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.567157 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:51 crc kubenswrapper[4953]: E1211 10:13:51.567540 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:52.067527913 +0000 UTC m=+150.091387026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:51 crc kubenswrapper[4953]: W1211 10:13:51.575910 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7b5a1d1_788d_448e_b859_c29daecb9a9b.slice/crio-db6d9bf39d3618af1f1a9d0a9b761da8cfb89571ff4d419a114dfbcb38864cc0 WatchSource:0}: Error finding container db6d9bf39d3618af1f1a9d0a9b761da8cfb89571ff4d419a114dfbcb38864cc0: Status 404 returned error can't find the container with id db6d9bf39d3618af1f1a9d0a9b761da8cfb89571ff4d419a114dfbcb38864cc0
Dec 11 10:13:51 crc kubenswrapper[4953]: W1211 10:13:51.597021 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3374cd21_f51d_4fd3_afe6_8fd43d81622a.slice/crio-3a55cf7298103798e51040c52439e5408172bbb63328d0f559052fe9a9334792 WatchSource:0}: Error finding container 3a55cf7298103798e51040c52439e5408172bbb63328d0f559052fe9a9334792: Status 404 returned error can't find the container with id 3a55cf7298103798e51040c52439e5408172bbb63328d0f559052fe9a9334792
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.671698 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:51 crc kubenswrapper[4953]: E1211 10:13:51.671849 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:52.171823396 +0000 UTC m=+150.195682429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.672256 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:13:51 crc kubenswrapper[4953]: E1211 10:13:51.672659 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:52.172639544 +0000 UTC m=+150.196498577 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.698847 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 10:13:51 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Dec 11 10:13:51 crc kubenswrapper[4953]: [+]process-running ok
Dec 11 10:13:51 crc kubenswrapper[4953]: healthz check failed
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.699137 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.820599 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.820862 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.820911 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.820940 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 10:13:51 crc kubenswrapper[4953]: E1211 10:13:51.822438 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:52.322410575 +0000 UTC m=+150.346269608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.833387 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.836250 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.838152 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.886602 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.946112 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:13:51 crc kubenswrapper[4953]: E1211 10:13:51.946564 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:52.446547672 +0000 UTC m=+150.470406705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.947525 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 10:13:51 crc kubenswrapper[4953]: I1211 10:13:51.947926 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.047775 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:52 crc kubenswrapper[4953]: E1211 10:13:52.049197 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:52.549161181 +0000 UTC m=+150.573020254 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.149652 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:13:52 crc kubenswrapper[4953]: E1211 10:13:52.150290 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:52.650266049 +0000 UTC m=+150.674125082 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.157510 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-svgfk" event={"ID":"3b330811-c6d6-4052-a061-d3c7781d619e","Type":"ContainerStarted","Data":"bff4e51c61dc7459a2d1f852d5d735715f675ad340c98b66056290ce3620502c"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.157550 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-svgfk" event={"ID":"3b330811-c6d6-4052-a061-d3c7781d619e","Type":"ContainerStarted","Data":"11ce0ba68b3b2ebaea5dd30b9b293a9b0808456c2c659373487dcbc9dad1266d"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.161364 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m69bw" event={"ID":"090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3","Type":"ContainerStarted","Data":"b3eec446ff19b64e07de930206353434040387aa40e044b9e6e94ad4a553fa76"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.182028 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-65zv8" event={"ID":"85c2f4f5-9757-4a9b-947e-27caba1bcf40","Type":"ContainerStarted","Data":"3f5bc5b8155275369f018b6779561dee44b0bae3beeaacb256e3e136e232068a"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.360590 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:52 crc kubenswrapper[4953]: E1211 10:13:52.360952 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:52.860931984 +0000 UTC m=+150.884791017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.365629 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" event={"ID":"f3b9e0de-9d50-4564-b075-9e56de0d6d20","Type":"ContainerStarted","Data":"fb0238f0017e9236b1a4c2b5762dacff0701152f9cfbaa2a148e8686e2f14ecd"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.370110 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" event={"ID":"908334c7-0bff-48d7-b294-70e88f29aa95","Type":"ContainerStarted","Data":"73d0d0a22b985d4b38f27abee40bc49fe1956d5c49310ba23014107daa4136b3"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.370165 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" event={"ID":"908334c7-0bff-48d7-b294-70e88f29aa95","Type":"ContainerStarted","Data":"6f18af52dc2318df77893914b18323c18a230098a36516e7ef6437f23a4ac311"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.377174 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" event={"ID":"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9","Type":"ContainerStarted","Data":"9963647be13983d82998a1a73165b9e9d0a7e47c07f700c59a1fe37fdb80c5af"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.377912 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h"
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.383820 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" event={"ID":"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9","Type":"ContainerDied","Data":"38ff998889a02485dc9d4395825eff63022446ffa71e62b8b23b7530351eb30f"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.383766 4953 generic.go:334] "Generic (PLEG): container finished" podID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerID="38ff998889a02485dc9d4395825eff63022446ffa71e62b8b23b7530351eb30f" exitCode=0
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.385910 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" event={"ID":"7a0b9050-3135-4670-bb41-0b7cf15918e6","Type":"ContainerStarted","Data":"95e73f2519fdd6235b22cad9a8074e45a9c5f9b23b3c2de86810d095f18dbdcb"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.406357 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" event={"ID":"c97cb435-9028-4ea4-a6cb-7851c2845566","Type":"ContainerStarted","Data":"a09b75ea6092d8406ff5e9face205d5a7bcec72b8cd63605b16ea521688eda55"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.406420 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" event={"ID":"c97cb435-9028-4ea4-a6cb-7851c2845566","Type":"ContainerStarted","Data":"988044abc1cfb19f72239af9e3d2d39202c6996ba3c0a8ca9b5d7381d856e794"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.416237 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wfrqd" event={"ID":"6a593442-828c-4cff-b9b9-4efa41ef6f44","Type":"ContainerStarted","Data":"c01783552eecd0d5a0ac23b8c1bcd503a75a30f2bda5b53efa242177d19e5b48"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.419701 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" event={"ID":"c310bdcf-5786-4970-a81d-651417521b3c","Type":"ContainerStarted","Data":"6b15692bfd5600033156f526db1864d13dc79ac85fddb96e49b47e0a186ec6bf"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.420972 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" event={"ID":"e4692382-7018-4f9f-b54e-4bcb83044387","Type":"ContainerStarted","Data":"1db36f9d15e2a2d98ef324d224b750c9cb89138f26191154c69d9f8532b63934"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.443650 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k" event={"ID":"24a3a305-afdf-4c02-b335-b8c173651e93","Type":"ContainerStarted","Data":"19fd1d7e3c6f7ad76e8943ee5f8127e03a655086723f63bfc5e43bf2aa5ca81f"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.454832 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" event={"ID":"88498e28-0a15-43a5-b157-5a3baccfaaaf","Type":"ContainerStarted","Data":"0663c1f7bcd738da2d586b7682bdf7a4dd951c70c5ea8c8362f97f69e222c90b"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.457516 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" event={"ID":"49bbe965-c5d1-4c35-a42b-3b8e7a264de7","Type":"ContainerStarted","Data":"44508179cc11dcc34dbd7c78a7707efcbb07d29cdabb4d4822f6ed691c0eb73e"}
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.459464 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6"
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.617167 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:13:52 crc kubenswrapper[4953]: E1211 10:13:52.617772 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:53.117755962 +0000 UTC m=+151.141615005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.769249 4953 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jnqj6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.769312 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.769314 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" podUID="49bbe965-c5d1-4c35-a42b-3b8e7a264de7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Dec 11 10:13:52 crc kubenswrapper[4953]: E1211 10:13:52.769868 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:53.26982807 +0000 UTC m=+151.293687113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:52 crc kubenswrapper[4953]: I1211 10:13:52.833445 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" event={"ID":"2100f1b5-4d63-421f-8090-601fbb1ce20d","Type":"ContainerStarted","Data":"4eb0562b573d0aad67204388997a07572daa59df179bcd4fea55fccb6ebc2a5c"}
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.020197 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:13:53 crc kubenswrapper[4953]: E1211 10:13:53.020519 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:53.520501283 +0000 UTC m=+151.544360316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.020777 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 10:13:53 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Dec 11 10:13:53 crc kubenswrapper[4953]: [+]process-running ok
Dec 11 10:13:53 crc kubenswrapper[4953]: healthz check failed
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.020849 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.095809 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" podStartSLOduration=127.095793952 podStartE2EDuration="2m7.095793952s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:52.835085058 +0000 UTC m=+150.858944081" watchObservedRunningTime="2025-12-11 10:13:53.095793952 +0000 UTC m=+151.119652985"
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.123897 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:53 crc kubenswrapper[4953]: E1211 10:13:53.124307 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:53.624274101 +0000 UTC m=+151.648133134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.125699 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.127768 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" event={"ID":"06554344-a634-4dec-aaf7-e3d9919d9e80","Type":"ContainerStarted","Data":"512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f"}
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.128513 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" event={"ID":"06554344-a634-4dec-aaf7-e3d9919d9e80","Type":"ContainerStarted","Data":"366a26b1116e09d72e9985dd5d5cf3c2279f6e338d085e54287029b14246fc31"}
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.128831 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p"
Dec 11 10:13:53 crc kubenswrapper[4953]: E1211 10:13:53.129010 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:53.628997946 +0000 UTC m=+151.652856969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.159142 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" event={"ID":"cc3eba09-e19d-4f1e-abbf-01d6f9463022","Type":"ContainerStarted","Data":"69e4b1480d01c674f195dc634c12d191a8705eec0c7ce2e152bcfefd24cf13d4"}
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.184103 4953 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmb4p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.184173 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.184375 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" event={"ID":"b1904c62-b304-45f8-a72b-e89e77597ec1","Type":"ContainerStarted","Data":"5c76bdfc1bf6ba4c43cf6ce5313755212ef44afd7532a09c3bde73290c3021bc"}
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.217210 4953 generic.go:334] "Generic (PLEG): container finished" podID="35703302-61e8-4383-9d13-0449584419e4" containerID="ca25b6e67eff1575d1c488c8efd450d6b7b1c0937b50c04aba8ba33cb080e7c0" exitCode=0
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.217331 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" event={"ID":"35703302-61e8-4383-9d13-0449584419e4","Type":"ContainerDied","Data":"ca25b6e67eff1575d1c488c8efd450d6b7b1c0937b50c04aba8ba33cb080e7c0"}
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.217369 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" event={"ID":"35703302-61e8-4383-9d13-0449584419e4","Type":"ContainerStarted","Data":"2fc77c552843caf09604575d96afad1e6c8a906850655c2fdc09649a4c1b5d90"}
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.224459 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9jt44" event={"ID":"63ca4931-8019-4e0d-ab43-ae5bd50b8d91","Type":"ContainerStarted","Data":"2fd19b1d3525293fe8a1689b91e17acf46c7fad4d58d6e03ed0463a14eac4aa9"}
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.225392 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9jt44"
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.622713 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.622757 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.625829 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-wfrqd" podStartSLOduration=129.625809695 podStartE2EDuration="2m9.625809695s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:53.622716183 +0000 UTC m=+151.646575236" watchObservedRunningTime="2025-12-11 10:13:53.625809695 +0000 UTC m=+151.649668728"
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.629788 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:53 crc kubenswrapper[4953]: E1211 10:13:53.629950 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:54.629912181 +0000 UTC m=+152.653771234 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.632643 4953 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-hmk2h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.632693 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" podUID="d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 11 10:13:53 crc kubenswrapper[4953]: I1211 10:13:53.642753 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" event={"ID":"a7b5a1d1-788d-448e-b859-c29daecb9a9b","Type":"ContainerStarted","Data":"db6d9bf39d3618af1f1a9d0a9b761da8cfb89571ff4d419a114dfbcb38864cc0"}
Dec 11 10:13:54 crc kubenswrapper[4953]: I1211 10:13:54.327877 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:54 crc kubenswrapper[4953]: E1211 10:13:54.332454 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:54.832430239 +0000 UTC m=+152.856289272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:54 crc kubenswrapper[4953]: I1211 10:13:54.334352 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" event={"ID":"7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf","Type":"ContainerStarted","Data":"ce19f905137bc09006d42c76547b2e7075ea36747d8a382c4ea0a0c457cbd7cc"}
Dec 11 10:13:54 crc kubenswrapper[4953]: I1211 10:13:54.967969 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:54 crc kubenswrapper[4953]: E1211 10:13:54.969975 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:55.969952749 +0000 UTC m=+153.993811782 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:54 crc kubenswrapper[4953]: I1211 10:13:54.970515 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 10:13:54 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Dec 11 10:13:54 crc kubenswrapper[4953]: [+]process-running ok
Dec 11 10:13:54 crc kubenswrapper[4953]: healthz check failed
Dec 11 10:13:54 crc kubenswrapper[4953]: I1211 10:13:54.970603 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 10:13:54 crc kubenswrapper[4953]: I1211 10:13:54.978326 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 10:13:54 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Dec 11 10:13:54 crc kubenswrapper[4953]: [+]process-running ok
Dec 11 10:13:54 crc kubenswrapper[4953]: healthz check failed
Dec 11 10:13:54 crc kubenswrapper[4953]: I1211 10:13:54.978370 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.507053 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:13:55 crc kubenswrapper[4953]: E1211 10:13:55.507450 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:56.007432756 +0000 UTC m=+154.031291789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.507937 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.507948 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.508125 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.508096 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.523042 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qmmnp" event={"ID":"9d5a8173-8a2a-42c1-9935-2433336c3be7","Type":"ContainerStarted","Data":"8bada8bcae3ce49c0bc12f033be8974078026c7b565e729c9a1ffc5e10a4a5f3"}
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.523093 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-wfrqd"
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.523108 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-v8699"
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.523160 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-wfrqd"
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.913788 4953 patch_prober.go:28] interesting pod/console-f9d7485db-wfrqd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Dec 11 10:13:55 crc kubenswrapper[4953]: E1211 10:13:55.914089 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:56.414072295 +0000 UTC m=+154.437931328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.914097 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wfrqd" podUID="6a593442-828c-4cff-b9b9-4efa41ef6f44" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused"
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.914028 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.914489 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:13:55 crc kubenswrapper[4953]: E1211 10:13:55.916639 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:56.416625849 +0000 UTC m=+154.440484962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.922475 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 10:13:55 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Dec 11 10:13:55 crc kubenswrapper[4953]: [+]process-running ok
Dec 11 10:13:55 crc kubenswrapper[4953]: healthz check failed
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.922527 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.933503 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" event={"ID":"0b1583dc-078f-4ced-a9d9-a16856b18406","Type":"ContainerStarted","Data":"b2f31621983aa668d1609875b8af2d74a46b93bb13e119f4dbd3f359774f45b7"}
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.933553 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" event={"ID":"0b1583dc-078f-4ced-a9d9-a16856b18406","Type":"ContainerStarted","Data":"f0db6c47159e6292cc71a5e6ce74a98b1fadece4e93debcf358f213ee28be9e6"}
Dec 11 10:13:55 crc kubenswrapper[4953]: I1211 10:13:55.935867 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6"
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:55.947633 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h"
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.315322 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:56 crc kubenswrapper[4953]: E1211 10:13:56.315750 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:56.815731001 +0000 UTC m=+154.839590034 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.395285 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-pzsms" event={"ID":"a3347424-53c5-4365-bcca-5ec96a8b2c0b","Type":"ContainerStarted","Data":"38875f491088eefea4d7fb96a5a2e8ed7d61c8ed24bacf7c6b7ad8d0397210a2"}
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.395351 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-pzsms" event={"ID":"a3347424-53c5-4365-bcca-5ec96a8b2c0b","Type":"ContainerStarted","Data":"b2f2fe79bfa5f09f98852e01697836bc1110423eecbff94431f3f9ba7bd2bf8c"}
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.396734 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-pzsms"
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.403069 4953 patch_prober.go:28] interesting pod/console-operator-58897d9998-pzsms container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.403129 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pzsms" podUID="a3347424-53c5-4365-bcca-5ec96a8b2c0b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.407493 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" event={"ID":"16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b","Type":"ContainerStarted","Data":"076756ba1909b53491d6fd59d370294cb6831295e6b036bf06c99429bee3647d"}
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.407531 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" event={"ID":"16c21d06-dc6b-45ea-8dc9-3a9de57e0b9b","Type":"ContainerStarted","Data":"35205f958fa4e612ab3f329643284ba103966cb1e0b9aa34cd5a81b50bee611b"}
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.451195 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:13:56 crc kubenswrapper[4953]: E1211 10:13:56.451650 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:56.951638326 +0000 UTC m=+154.975497359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.563872 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:56 crc kubenswrapper[4953]: E1211 10:13:56.567186 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:57.06716642 +0000 UTC m=+155.091025453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.590038 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" event={"ID":"51a8b0f7-87b1-48af-961b-5802873c6f76","Type":"ContainerStarted","Data":"e799f75f1ff40f1b71e97d850e888a43dfdce78f61a8ceec5c6356e15d837f91"}
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.590181 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" event={"ID":"51a8b0f7-87b1-48af-961b-5802873c6f76","Type":"ContainerStarted","Data":"38352ac619d4b62beff3d1755783a0160c65349f2dfcd708fad3cb73c1d83535"}
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.600029 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" event={"ID":"f829d83c-e4f2-4e16-b02e-57b445b6fa41","Type":"ContainerStarted","Data":"ebd389efdd5df3dac887ce9d5e7dafb64143c6b06b44790c2885d09ba757791a"}
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.698206 4953 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmb4p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.698547 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.698989 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:13:56 crc kubenswrapper[4953]: E1211 10:13:56.699391 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:57.199364672 +0000 UTC m=+155.223223705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.703369 4953 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmb4p container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.703406 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.753944 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 10:13:56 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Dec 11 10:13:56 crc kubenswrapper[4953]: [+]process-running ok
Dec 11 10:13:56 crc kubenswrapper[4953]: healthz check failed
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.753996 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.765851 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" event={"ID":"6940241d-144c-44c2-bc2b-6b27c9ed106d","Type":"ContainerStarted","Data":"feb5d2e90478073d886cb062a9427eb5ee1b87b88a552e254cec16e639f1477e"}
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.829331 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" event={"ID":"5c6ad29d-983a-4388-a962-3ee6af6f042f","Type":"ContainerStarted","Data":"3b21723f17bd8c8355530d9a8b9deaf685c8f3bd936e9ae698e2f49e0e68b890"}
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.833290 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:56 crc kubenswrapper[4953]: E1211 10:13:56.835170 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:57.335151483 +0000 UTC m=+155.359010516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.862609 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" event={"ID":"c573c49d-036d-4d92-a63d-4f830df8a262","Type":"ContainerStarted","Data":"ced72f405a3bb2eefa57f904a407f3fe8398bc1fd93b83bb11df85ddc300eeca"}
Dec 11 10:13:56 crc kubenswrapper[4953]: I1211 10:13:56.972314 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:13:57 crc kubenswrapper[4953]: E1211 10:13:56.972765 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:57.472752244 +0000 UTC m=+155.496611277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:56.987595 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" event={"ID":"3cee2756-70ae-44b9-b52a-43cf1bc552e0","Type":"ContainerStarted","Data":"b74aacc0ec3fc0c3f973c79f99b35f437c1da67bca4154b083521ee8b3077f63"}
Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.002906 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" event={"ID":"7c7d0de5-9432-4fd6-b44d-6529c186be7e","Type":"ContainerStarted","Data":"85d4a2ceba4f41f38835612052b38e3355a0f3e315964f8922c9c862926e163e"}
Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.024262 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c" event={"ID":"872f79b9-6f54-4b5c-bc80-cd2404dc3156","Type":"ContainerStarted","Data":"1b76379bc3f98114964eafa48c7144b5652186464fa5bb903b21e75282ed15ef"}
Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.032784 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" event={"ID":"3374cd21-f51d-4fd3-afe6-8fd43d81622a","Type":"ContainerStarted","Data":"3a55cf7298103798e51040c52439e5408172bbb63328d0f559052fe9a9334792"}
Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.053185 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" event={"ID":"332434db-75e4-4fce-8973-aff84310d0f5","Type":"ContainerStarted","Data":"9d4901533846ff0f3f13087e6033c5fd902208f057ac2d2574a460d5bff6bba5"}
Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.089504 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:57 crc kubenswrapper[4953]: E1211 10:13:57.090846 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:57.590829791 +0000 UTC m=+155.614688824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.211196 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:13:57 crc kubenswrapper[4953]: E1211 10:13:57.211745 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:57.711728111 +0000 UTC m=+155.735587144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.444031 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 10:13:57 crc kubenswrapper[4953]: E1211 10:13:57.444426 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:57.944411793 +0000 UTC m=+155.968270826 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.545121 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:57 crc kubenswrapper[4953]: E1211 10:13:57.545590 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:58.045548803 +0000 UTC m=+156.069407836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.645986 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:57 crc kubenswrapper[4953]: E1211 10:13:57.646455 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:58.146426374 +0000 UTC m=+156.170285457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.647126 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.647245 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" podStartSLOduration=132.647227231 podStartE2EDuration="2m12.647227231s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:57.643181837 +0000 UTC m=+155.667040870" watchObservedRunningTime="2025-12-11 10:13:57.647227231 +0000 UTC m=+155.671086264" Dec 11 10:13:57 crc kubenswrapper[4953]: E1211 10:13:57.647466 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:58.147452658 +0000 UTC m=+156.171311691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.735629 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:13:57 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:13:57 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:13:57 crc kubenswrapper[4953]: healthz check failed Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.735693 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.749789 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:57 crc kubenswrapper[4953]: E1211 10:13:57.750262 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:58.250245493 +0000 UTC m=+156.274104526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:57 crc kubenswrapper[4953]: I1211 10:13:57.908905 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:57 crc kubenswrapper[4953]: E1211 10:13:57.909536 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:58.409504255 +0000 UTC m=+156.433363288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.009951 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:58 crc kubenswrapper[4953]: E1211 10:13:58.010294 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:58.510278341 +0000 UTC m=+156.534137374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.116438 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:58 crc kubenswrapper[4953]: E1211 10:13:58.117133 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:58.617117996 +0000 UTC m=+156.640977029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.179016 4953 generic.go:334] "Generic (PLEG): container finished" podID="c97cb435-9028-4ea4-a6cb-7851c2845566" containerID="a09b75ea6092d8406ff5e9face205d5a7bcec72b8cd63605b16ea521688eda55" exitCode=0 Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.179114 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" event={"ID":"c97cb435-9028-4ea4-a6cb-7851c2845566","Type":"ContainerDied","Data":"a09b75ea6092d8406ff5e9face205d5a7bcec72b8cd63605b16ea521688eda55"} Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.225795 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:58 crc kubenswrapper[4953]: E1211 10:13:58.226121 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:58.726102242 +0000 UTC m=+156.749961275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.328526 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:58 crc kubenswrapper[4953]: E1211 10:13:58.329161 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:58.829148223 +0000 UTC m=+156.853007256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.381796 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.383827 4953 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8s4mq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.383990 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" podUID="f3b9e0de-9d50-4564-b075-9e56de0d6d20" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.394219 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cd504c1a9ee09363e7fea46a081fed73b31998a4c49a1611a1539ce054e593e2"} Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.408757 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" event={"ID":"2100f1b5-4d63-421f-8090-601fbb1ce20d","Type":"ContainerStarted","Data":"da2c1acec603d629ea2bc61055eb377007521b82ac8e6f975032762f420b2f9b"} Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.498365 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:58 crc kubenswrapper[4953]: E1211 10:13:58.498782 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:58.998766829 +0000 UTC m=+157.022625862 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.560751 4953 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7b4cr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.563728 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" podUID="e4692382-7018-4f9f-b54e-4bcb83044387" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.634227 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.634272 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" event={"ID":"e4692382-7018-4f9f-b54e-4bcb83044387","Type":"ContainerStarted","Data":"e4b53d2ec2d3a49f26e8fa70717497fc2a8151c8e4620087bb607c375d605421"} Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.634294 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rxj74" event={"ID":"74a6bf4e-fce1-4865-a637-13252c668255","Type":"ContainerStarted","Data":"395f54337c10ad07b6471f38b321db68defe7d8339c840031ab7d0b767f6dcd4"} Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.638038 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:58 crc kubenswrapper[4953]: E1211 10:13:58.641286 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:59.14125847 +0000 UTC m=+157.165117573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.645709 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" event={"ID":"5c6ad29d-983a-4388-a962-3ee6af6f042f","Type":"ContainerStarted","Data":"093ab679c6be364e299df7fb964ed9e3b72feb74339e2e42aa6147cae3a1f048"} Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.663245 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" event={"ID":"3374cd21-f51d-4fd3-afe6-8fd43d81622a","Type":"ContainerStarted","Data":"6c6c9f65892103edb5521b497352e24c778aa6fce75299e705fe40d330bbabcc"} Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.675162 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d8a0cf80f9a264e6b56ea6318604df6377fc392ab7ed2ec810a13aec4e189ea9"} Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.685261 4953 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmb4p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.685413 4953 patch_prober.go:28] interesting pod/console-operator-58897d9998-pzsms container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.685607 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.685853 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pzsms" podUID="a3347424-53c5-4365-bcca-5ec96a8b2c0b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.685469 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.686215 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.691478 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:13:58 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:13:58 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:13:58 crc kubenswrapper[4953]: healthz check failed Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.691541 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.739501 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:58 crc kubenswrapper[4953]: E1211 10:13:58.740341 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:59.240325191 +0000 UTC m=+157.264184224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.841256 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:58 crc kubenswrapper[4953]: E1211 10:13:58.864102 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:59.364084566 +0000 UTC m=+157.387943599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:58 crc kubenswrapper[4953]: I1211 10:13:58.944071 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:58 crc kubenswrapper[4953]: E1211 10:13:58.944538 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:59.444518376 +0000 UTC m=+157.468377419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.045313 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:59 crc kubenswrapper[4953]: E1211 10:13:59.045709 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:59.545696166 +0000 UTC m=+157.569555199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.155269 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:59 crc kubenswrapper[4953]: E1211 10:13:59.155675 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:59.655657297 +0000 UTC m=+157.679516330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.270144 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:59 crc kubenswrapper[4953]: E1211 10:13:59.270523 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:13:59.770498798 +0000 UTC m=+157.794357831 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.429720 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:59 crc kubenswrapper[4953]: E1211 10:13:59.430136 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:13:59.930115974 +0000 UTC m=+157.953975017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.565953 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:59 crc kubenswrapper[4953]: E1211 10:13:59.567001 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:00.06697976 +0000 UTC m=+158.090838803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.688689 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:59 crc kubenswrapper[4953]: E1211 10:13:59.689053 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:00.189033309 +0000 UTC m=+158.212892342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.691643 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:13:59 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:13:59 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:13:59 crc kubenswrapper[4953]: healthz check failed Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.691698 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.708524 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m69bw" event={"ID":"090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3","Type":"ContainerStarted","Data":"3b7ba41518ba55ddeddcf41a00ed1f0fd0d15cc0005439eed96f7deb0d3f0f14"} Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.782474 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" event={"ID":"b1904c62-b304-45f8-a72b-e89e77597ec1","Type":"ContainerStarted","Data":"24c4f1d2be87323dac343ad71a0acabb4af49858fac2d50da22a09baea795bc6"} Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.785569 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" event={"ID":"908334c7-0bff-48d7-b294-70e88f29aa95","Type":"ContainerStarted","Data":"8048e48bc123c85e4d3faf139d1fb386bde2391939a1128995f0a3a60a0fb3c7"} Dec 11 10:13:59 crc 
kubenswrapper[4953]: I1211 10:13:59.787349 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" event={"ID":"3cee2756-70ae-44b9-b52a-43cf1bc552e0","Type":"ContainerStarted","Data":"16779bbc80f7901b6a818a97efeaf45158347fba82f44159053fbe7c5a17d76b"} Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.788559 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.792987 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:13:59 crc kubenswrapper[4953]: E1211 10:13:59.793429 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:00.293412955 +0000 UTC m=+158.317271988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.797686 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" event={"ID":"7eac0fc7-e06a-4d6c-8e8a-a9cebae9d6cf","Type":"ContainerStarted","Data":"c1d5948be82c22c3d8e2b0ad460cf23a3e144276ce93040953bac84c25927dd8"} Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.799164 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k" event={"ID":"24a3a305-afdf-4c02-b335-b8c173651e93","Type":"ContainerStarted","Data":"63786860c83e045548939f8945b6b005ff1a1712d60ae4b80ea3fa3d8b98c639"} Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.800340 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" event={"ID":"7a0b9050-3135-4670-bb41-0b7cf15918e6","Type":"ContainerStarted","Data":"b84d3624a8858307d54930cb848ca604228b162e76f670cecf4a348a83bfd639"} Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.801728 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" event={"ID":"51a8b0f7-87b1-48af-961b-5802873c6f76","Type":"ContainerStarted","Data":"9916a9ab79d144d6dd0c06b6ee35b504c9e12f759a52d44d64376257503236e2"} Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.803014 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" event={"ID":"f3b9e0de-9d50-4564-b075-9e56de0d6d20","Type":"ContainerStarted","Data":"b61d2299ee3d2f27ab6d088e5b26241daa5026da83845ea59aed8f0b7d22afb2"} Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 
10:13:59.808191 4953 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8s4mq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.808222 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" podUID="f3b9e0de-9d50-4564-b075-9e56de0d6d20" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.810754 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9e0988ed3146e6e1b2436cd52b88218e0a9b6c7843e6ac97227cc6a45daf6622"} Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.812469 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" event={"ID":"7c7d0de5-9432-4fd6-b44d-6529c186be7e","Type":"ContainerStarted","Data":"7fa8e5216b31f865c2e2fa5550beca5b49aba8f5460ef4c72b22f58f71e641be"} Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.815565 4953 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7b4cr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.815598 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" podUID="e4692382-7018-4f9f-b54e-4bcb83044387" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.815875 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.815892 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.897454 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:13:59 crc kubenswrapper[4953]: E1211 10:13:59.899744 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 10:14:00.399727876 +0000 UTC m=+158.423586909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.900297 4953 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-cl6x8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 11 10:13:59 crc kubenswrapper[4953]: I1211 10:13:59.900333 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" podUID="3cee2756-70ae-44b9-b52a-43cf1bc552e0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.040808 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:00 crc kubenswrapper[4953]: E1211 10:14:00.045553 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:00.545527276 +0000 UTC m=+158.569386309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.080921 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-vjv7f" podStartSLOduration=136.080904192 podStartE2EDuration="2m16.080904192s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:00.06235245 +0000 UTC m=+158.086211483" watchObservedRunningTime="2025-12-11 10:14:00.080904192 +0000 UTC m=+158.104763225" Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.195868 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-22hb8" podStartSLOduration=136.19585171 podStartE2EDuration="2m16.19585171s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:00.195236799 +0000 UTC m=+158.219095832" watchObservedRunningTime="2025-12-11 10:14:00.19585171 +0000 UTC m=+158.219710743" Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.201863 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:00 crc kubenswrapper[4953]: E1211 10:14:00.202149 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:00.702131846 +0000 UTC m=+158.725990879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.346984 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:00 crc kubenswrapper[4953]: E1211 10:14:00.347392 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:00.847373395 +0000 UTC m=+158.871232428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.348617 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" podStartSLOduration=134.348580395 podStartE2EDuration="2m14.348580395s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:00.346120574 +0000 UTC m=+158.369979607" watchObservedRunningTime="2025-12-11 10:14:00.348580395 +0000 UTC m=+158.372439428" Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.439092 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" podStartSLOduration=134.439070795 podStartE2EDuration="2m14.439070795s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:00.42799037 +0000 UTC m=+158.451849403" watchObservedRunningTime="2025-12-11 10:14:00.439070795 +0000 UTC m=+158.462929828" Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.448110 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:00 crc kubenswrapper[4953]: E1211 10:14:00.448426 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:00.948401561 +0000 UTC m=+158.972260594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.465238 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.531043 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kb52r" podStartSLOduration=135.531020402 podStartE2EDuration="2m15.531020402s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:00.528099236 +0000 UTC m=+158.551958269" watchObservedRunningTime="2025-12-11 10:14:00.531020402 +0000 UTC m=+158.554879435" Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.549593 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:00 crc kubenswrapper[4953]: E1211 10:14:00.550026 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:01.050012397 +0000 UTC m=+159.073871430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.650839 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.660415 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-pzsms" podStartSLOduration=136.660397942 podStartE2EDuration="2m16.660397942s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:00.650200346 +0000 UTC m=+158.674059379" watchObservedRunningTime="2025-12-11 10:14:00.660397942 +0000 UTC m=+158.684256975" Dec 11 10:14:00 crc kubenswrapper[4953]: E1211 10:14:00.663135 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:01.163120052 +0000 UTC m=+159.186979085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.725166 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:00 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:00 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:00 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.760361 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:00 crc kubenswrapper[4953]: E1211 10:14:00.762413 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 10:14:01.26239413 +0000 UTC m=+159.286253153 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.725219 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.770121 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9jt44" podStartSLOduration=135.770100693 podStartE2EDuration="2m15.770100693s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:00.766819966 +0000 UTC m=+158.790678999" watchObservedRunningTime="2025-12-11 10:14:00.770100693 +0000 UTC m=+158.793959736" Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.858140 4953 patch_prober.go:28] interesting pod/console-operator-58897d9998-pzsms container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.858212 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pzsms" podUID="a3347424-53c5-4365-bcca-5ec96a8b2c0b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.862999 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:00 crc kubenswrapper[4953]: E1211 10:14:00.863635 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:01.363615943 +0000 UTC m=+159.387474976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.960329 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" podStartSLOduration=134.960311717 podStartE2EDuration="2m14.960311717s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:00.951206977 +0000 UTC m=+158.975066000" watchObservedRunningTime="2025-12-11 10:14:00.960311717 +0000 UTC m=+158.984170750" Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.964976 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:00 crc kubenswrapper[4953]: E1211 10:14:00.965415 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:01.465395674 +0000 UTC m=+159.489254707 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.994771 4953 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8s4mq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.994829 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" podUID="f3b9e0de-9d50-4564-b075-9e56de0d6d20" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.995549 4953 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-cl6x8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 11 10:14:00 crc kubenswrapper[4953]: I1211 10:14:00.995613 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" podUID="3cee2756-70ae-44b9-b52a-43cf1bc552e0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.030759 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-69q67" podStartSLOduration=136.030736495 podStartE2EDuration="2m16.030736495s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:01.007543762 +0000 UTC m=+159.031402795" watchObservedRunningTime="2025-12-11 10:14:01.030736495 +0000 UTC m=+159.054595528" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.031716 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" event={"ID":"f829d83c-e4f2-4e16-b02e-57b445b6fa41","Type":"ContainerStarted","Data":"c1de21163aeab64b067793442969ea0050d9cb9e0aedfd7098d4b504b561a10a"} Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.031741 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" event={"ID":"a7b5a1d1-788d-448e-b859-c29daecb9a9b","Type":"ContainerStarted","Data":"b1cce6f316714f08c249994d8891b1eeaabc73d7c2230af13175656d01716bc6"} Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.069476 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:01 crc kubenswrapper[4953]: E1211 10:14:01.070024 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:01.570003348 +0000 UTC m=+159.593862381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.225128 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:01 crc kubenswrapper[4953]: E1211 10:14:01.235371 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:01.73530415 +0000 UTC m=+159.759163173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.293778 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7ffjt" podStartSLOduration=136.293762065 podStartE2EDuration="2m16.293762065s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:01.240794841 +0000 UTC m=+159.264653874" watchObservedRunningTime="2025-12-11 10:14:01.293762065 +0000 UTC m=+159.317621098" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.316140 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vrm5k" podStartSLOduration=135.316124721 podStartE2EDuration="2m15.316124721s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:01.315099658 +0000 UTC m=+159.338958691" watchObservedRunningTime="2025-12-11 10:14:01.316124721 +0000 UTC m=+159.339983754" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.318354 4953 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zlmqt" podStartSLOduration=135.318340605 podStartE2EDuration="2m15.318340605s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:01.294950845 +0000 UTC m=+159.318809878" watchObservedRunningTime="2025-12-11 10:14:01.318340605 +0000 UTC m=+159.342199638" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.328851 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:01 crc kubenswrapper[4953]: E1211 10:14:01.329236 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:01.829218933 +0000 UTC m=+159.853077956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.431266 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:01 crc kubenswrapper[4953]: E1211 10:14:01.431696 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:01.931670986 +0000 UTC m=+159.955530019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.459266 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-nzrxl" podStartSLOduration=135.459245174 podStartE2EDuration="2m15.459245174s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:01.454226979 +0000 UTC m=+159.478086032" watchObservedRunningTime="2025-12-11 10:14:01.459245174 +0000 UTC m=+159.483104207" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.461510 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8lqhc" podStartSLOduration=136.461502648 podStartE2EDuration="2m16.461502648s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:01.416465956 +0000 UTC m=+159.440324989" watchObservedRunningTime="2025-12-11 10:14:01.461502648 +0000 UTC m=+159.485361681" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.532486 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:01 crc kubenswrapper[4953]: E1211 10:14:01.532692 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:02.032665761 +0000 UTC m=+160.056524794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.532914 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:01 crc kubenswrapper[4953]: E1211 10:14:01.533272 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:02.033247691 +0000 UTC m=+160.057106724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.558382 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-x8dvj" podStartSLOduration=136.558364738 podStartE2EDuration="2m16.558364738s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:01.556557859 +0000 UTC m=+159.580416922" watchObservedRunningTime="2025-12-11 10:14:01.558364738 +0000 UTC m=+159.582223771" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.657764 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:01 crc kubenswrapper[4953]: E1211 10:14:01.657963 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:02.157932426 +0000 UTC m=+160.181791459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.658321 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:01 crc kubenswrapper[4953]: E1211 10:14:01.659062 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:02.159048633 +0000 UTC m=+160.182907666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.661335 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-jb2sd" podStartSLOduration=135.661320507 podStartE2EDuration="2m15.661320507s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:01.657626836 +0000 UTC m=+159.681485869" watchObservedRunningTime="2025-12-11 10:14:01.661320507 +0000 UTC m=+159.685179540" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.701666 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.702355 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.703261 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:01 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:01 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:01 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.703288 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.712861 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.712864 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.714115 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.776953 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" podStartSLOduration=137.776918694 podStartE2EDuration="2m17.776918694s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:01.776767439 +0000 UTC m=+159.800626472" watchObservedRunningTime="2025-12-11 10:14:01.776918694 +0000 UTC m=+159.800777727" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.804287 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.804919 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80d43f58-fcb7-4227-b9ef-9e302b7ee878-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"80d43f58-fcb7-4227-b9ef-9e302b7ee878\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.805072 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80d43f58-fcb7-4227-b9ef-9e302b7ee878-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"80d43f58-fcb7-4227-b9ef-9e302b7ee878\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 10:14:01 crc kubenswrapper[4953]: E1211 10:14:01.805352 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:02.305302799 +0000 UTC m=+160.329161832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.890743 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9bt8h" podStartSLOduration=136.89072457 podStartE2EDuration="2m16.89072457s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:01.881791196 +0000 UTC m=+159.905650229" watchObservedRunningTime="2025-12-11 10:14:01.89072457 +0000 UTC m=+159.914583603" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.906368 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.906432 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80d43f58-fcb7-4227-b9ef-9e302b7ee878-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"80d43f58-fcb7-4227-b9ef-9e302b7ee878\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.906469 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80d43f58-fcb7-4227-b9ef-9e302b7ee878-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"80d43f58-fcb7-4227-b9ef-9e302b7ee878\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 10:14:01 crc kubenswrapper[4953]: I1211 10:14:01.906901 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80d43f58-fcb7-4227-b9ef-9e302b7ee878-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"80d43f58-fcb7-4227-b9ef-9e302b7ee878\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 10:14:01 crc kubenswrapper[4953]: E1211 10:14:01.907122 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:02.4071103 +0000 UTC m=+160.430969333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.167491 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:02 crc kubenswrapper[4953]: E1211 10:14:02.169196 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:02.668699393 +0000 UTC m=+160.692558436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.424815 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:02 crc kubenswrapper[4953]: E1211 10:14:02.425217 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:02.925202358 +0000 UTC m=+160.949061391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.439843 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-65zv8" event={"ID":"85c2f4f5-9757-4a9b-947e-27caba1bcf40","Type":"ContainerStarted","Data":"296dbc776fe62dccf61ccbbee40c0e6bf68ede6fb3935fa5b8b63f9e7e6f3ea7"} Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.440633 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80d43f58-fcb7-4227-b9ef-9e302b7ee878-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"80d43f58-fcb7-4227-b9ef-9e302b7ee878\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.443571 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-svgfk" event={"ID":"3b330811-c6d6-4052-a061-d3c7781d619e","Type":"ContainerStarted","Data":"e8c6d3d4b0aedaac6f270430cfc8118b026d32c83bf6fb005b8ebe8ac0ca046f"} Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.489828 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" event={"ID":"c310bdcf-5786-4970-a81d-651417521b3c","Type":"ContainerStarted","Data":"e479f2b280991ab97511bcb02c770fad3ae833358fe9acd0282e5c8d209cc7df"} Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.577312 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:02 crc kubenswrapper[4953]: E1211 10:14:02.579512 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:03.079493289 +0000 UTC m=+161.103352322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.579648 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:02 crc kubenswrapper[4953]: E1211 10:14:02.580467 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:03.08043398 +0000 UTC m=+161.104293183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.637141 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.683269 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:02 crc kubenswrapper[4953]: E1211 10:14:02.685061 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:03.185038163 +0000 UTC m=+161.208897196 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.747727 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:02 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:02 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:02 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.747816 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.820029 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:02 crc kubenswrapper[4953]: E1211 10:14:02.820502 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:03.320484723 +0000 UTC m=+161.344343756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.837974 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" event={"ID":"332434db-75e4-4fce-8973-aff84310d0f5","Type":"ContainerStarted","Data":"fd9e8ea4949ff756765d866b8136afc38c9cc04f74117d13fa40b578edfef3c9"} Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.838506 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.931466 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:02 crc kubenswrapper[4953]: E1211 10:14:02.932940 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:03.432921346 +0000 UTC m=+161.456780379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.983090 4953 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6jmjq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.983164 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" podUID="332434db-75e4-4fce-8973-aff84310d0f5" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.991981 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" event={"ID":"c97cb435-9028-4ea4-a6cb-7851c2845566","Type":"ContainerStarted","Data":"33a1f1f889ec4c0bad4786326b6d83e5f8eeec81613d394f9034ed45bac43ed6"} Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.994280 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.995550 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c" event={"ID":"872f79b9-6f54-4b5c-bc80-cd2404dc3156","Type":"ContainerStarted","Data":"c8670a7d14369c0e33512bdd8641ad8807ab47c0e9858e8ea066b9adaba7cfb4"} Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.996556 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qmmnp" event={"ID":"9d5a8173-8a2a-42c1-9935-2433336c3be7","Type":"ContainerStarted","Data":"ac83c0b7c8314d0b95ce13f606c10daacf71de031d29b1ceb573e84c64e7fc47"} Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.998617 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" event={"ID":"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9","Type":"ContainerStarted","Data":"f183abe1691c218366fbb4971c0ead0a606b827bc0d02d663dbb0e38e1b661fa"} Dec 11 10:14:02 crc kubenswrapper[4953]: I1211 10:14:02.999483 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:14:03 crc kubenswrapper[4953]: I1211 10:14:03.000390 4953 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-cl6x8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 11 10:14:03 crc kubenswrapper[4953]: I1211 10:14:03.000423 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" 
podUID="3cee2756-70ae-44b9-b52a-43cf1bc552e0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 11 10:14:03 crc kubenswrapper[4953]: I1211 10:14:03.034425 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:03 crc kubenswrapper[4953]: E1211 10:14:03.078273 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:03.578254221 +0000 UTC m=+161.602113314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:03 crc kubenswrapper[4953]: I1211 10:14:03.244319 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:03 crc kubenswrapper[4953]: E1211 10:14:03.244849 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:03.744831105 +0000 UTC m=+161.768690138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:03 crc kubenswrapper[4953]: I1211 10:14:03.353357 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:03 crc kubenswrapper[4953]: E1211 10:14:03.353854 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:03.853831335 +0000 UTC m=+161.877690368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:03 crc kubenswrapper[4953]: I1211 10:14:03.592952 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:03 crc kubenswrapper[4953]: E1211 10:14:03.593504 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:04.093475905 +0000 UTC m=+162.117334938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:03 crc kubenswrapper[4953]: I1211 10:14:03.882294 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:03 crc kubenswrapper[4953]: E1211 10:14:03.882627 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:04.382614685 +0000 UTC m=+162.406473718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:03 crc kubenswrapper[4953]: I1211 10:14:03.893885 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:03 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:03 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:03 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:03 crc kubenswrapper[4953]: I1211 10:14:03.894191 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:03 crc kubenswrapper[4953]: I1211 10:14:03.984955 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:03 crc kubenswrapper[4953]: E1211 10:14:03.985292 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:04.485256474 +0000 UTC m=+162.509115517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:03 crc kubenswrapper[4953]: I1211 10:14:03.985350 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:03 crc kubenswrapper[4953]: E1211 10:14:03.985793 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:04.485779162 +0000 UTC m=+162.509638195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:04 crc kubenswrapper[4953]: I1211 10:14:04.111423 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:04 crc kubenswrapper[4953]: E1211 10:14:04.111824 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:04.6118048 +0000 UTC m=+162.635663833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:04 crc kubenswrapper[4953]: I1211 10:14:04.445887 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:04 crc kubenswrapper[4953]: E1211 10:14:04.446393 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:04.946367986 +0000 UTC m=+162.970227019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:04 crc kubenswrapper[4953]: I1211 10:14:04.653497 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:04 crc kubenswrapper[4953]: E1211 10:14:04.653882 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:05.153834857 +0000 UTC m=+163.177693890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:04 crc kubenswrapper[4953]: I1211 10:14:04.654598 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:04 crc kubenswrapper[4953]: E1211 10:14:04.654991 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:05.154969295 +0000 UTC m=+163.178828328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:04 crc kubenswrapper[4953]: I1211 10:14:04.678538 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ee47a47c7995f5234b25184d5ab9e34f53c2e6a2d9a87aee1d872abebd110ac0"} Dec 11 10:14:04 crc kubenswrapper[4953]: I1211 10:14:04.692502 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:04 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:04 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:04 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:04 crc kubenswrapper[4953]: I1211 10:14:04.692563 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:04 crc kubenswrapper[4953]: I1211 10:14:04.697197 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" event={"ID":"3374cd21-f51d-4fd3-afe6-8fd43d81622a","Type":"ContainerStarted","Data":"ac2bb5969ad40965dc317c3aa35373b076cb550ba10cf18ee5347b63815750e1"} Dec 11 10:14:04 crc kubenswrapper[4953]: I1211 10:14:04.775316 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:04 crc kubenswrapper[4953]: E1211 10:14:04.775882 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:05.275857596 +0000 UTC m=+163.299716629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.120648 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.121235 4953 patch_prober.go:28] interesting pod/console-f9d7485db-wfrqd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.121312 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wfrqd" podUID="6a593442-828c-4cff-b9b9-4efa41ef6f44" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 11 10:14:05 crc kubenswrapper[4953]: E1211 10:14:05.122193 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:05.622176148 +0000 UTC m=+163.646035261 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.126241 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-65zv8" podStartSLOduration=23.126220801 podStartE2EDuration="23.126220801s" podCreationTimestamp="2025-12-11 10:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:04.707832596 +0000 UTC m=+162.731691629" watchObservedRunningTime="2025-12-11 10:14:05.126220801 +0000 UTC m=+163.150079834" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.257114 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ae6d3ab0b8cba20554ece54e2f4a13aaa29e866cbe37bc01baa9c391cd255568"} Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.257473 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:05 crc kubenswrapper[4953]: E1211 10:14:05.258037 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:05.75801675 +0000 UTC m=+163.781875783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.258889 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.258953 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.259578 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.259752 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.288325 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-svgfk" podStartSLOduration=139.288306658 podStartE2EDuration="2m19.288306658s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:05.121601049 +0000 UTC m=+163.145460082" watchObservedRunningTime="2025-12-11 10:14:05.288306658 +0000 UTC m=+163.312165681" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.307319 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rxj74" event={"ID":"74a6bf4e-fce1-4865-a637-13252c668255","Type":"ContainerStarted","Data":"b82299a0a018af7787bc7a4fc09ddc28c0d4e9ec5ac455d149d4018840000534"} Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.310975 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-pzsms" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.343522 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" event={"ID":"88498e28-0a15-43a5-b157-5a3baccfaaaf","Type":"ContainerStarted","Data":"1d08248671906f09dfebb27a3caa1268bf31d38878e09af2bf48efe79e0f1eef"} Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.346194 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" 
event={"ID":"5c6ad29d-983a-4388-a962-3ee6af6f042f","Type":"ContainerStarted","Data":"ddc5571ee5e017b0cb4323d3ace85687b958a6c0f8cb440044b5249c431265ae"} Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.347116 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.359774 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:05 crc kubenswrapper[4953]: E1211 10:14:05.360250 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:05.860238217 +0000 UTC m=+163.884097240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.432767 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.434797 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.439844 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m69bw" event={"ID":"090e3900-f3c2-4c4b-aa6f-3f2b77fa67f3","Type":"ContainerStarted","Data":"02b961ecf703655d1cd35e6734bc55f58879b073755d4204a3df1d1604f46d5a"} Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.441145 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-m69bw" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.450254 4953 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-9shds container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.450515 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" podUID="c97cb435-9028-4ea4-a6cb-7851c2845566" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.460168 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b55xt" podStartSLOduration=140.460138735 podStartE2EDuration="2m20.460138735s" 
podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:05.305308457 +0000 UTC m=+163.329167490" watchObservedRunningTime="2025-12-11 10:14:05.460138735 +0000 UTC m=+163.483997768" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.461049 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" podStartSLOduration=139.461040646 podStartE2EDuration="2m19.461040646s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:05.434285044 +0000 UTC m=+163.458144077" watchObservedRunningTime="2025-12-11 10:14:05.461040646 +0000 UTC m=+163.484899679" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.461618 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:05 crc kubenswrapper[4953]: E1211 10:14:05.463326 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:05.96329642 +0000 UTC m=+163.987155453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.465288 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7813b60d59cd210f27510889b872634cc9ab7c2cc7af3710ce5e4a0bccea0db6"} Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.577104 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:05 crc kubenswrapper[4953]: E1211 10:14:05.577548 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:06.077532111 +0000 UTC m=+164.101391144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.581908 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" event={"ID":"7c7d0de5-9432-4fd6-b44d-6529c186be7e","Type":"ContainerStarted","Data":"3b4c1f63d2929187a757628ffd07d81249a5e2f686154f3d6849c9ee21fe4d4e"} Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.681496 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:05 crc kubenswrapper[4953]: E1211 10:14:05.682502 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:06.182487157 +0000 UTC m=+164.206346190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.706224 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" event={"ID":"35703302-61e8-4383-9d13-0449584419e4","Type":"ContainerStarted","Data":"d6e95273bc6b5fe79837643f50259375eeb89985c01e0ad497b3ba7bae8e87a8"} Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.708629 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:05 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:05 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:05 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.708690 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.713826 4953 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6jmjq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Dec 11 
10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.713906 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" podUID="332434db-75e4-4fce-8973-aff84310d0f5" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.727760 4953 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-crtp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.727828 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" podUID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.790667 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:05 crc kubenswrapper[4953]: E1211 10:14:05.791067 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:06.291051501 +0000 UTC m=+164.314910534 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.830419 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krg44" podStartSLOduration=139.830399377 podStartE2EDuration="2m19.830399377s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:05.681031078 +0000 UTC m=+163.704890111" watchObservedRunningTime="2025-12-11 10:14:05.830399377 +0000 UTC m=+163.854258430" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.839439 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.861751 4953 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-crtp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.861806 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" podUID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.862151 4953 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-crtp9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.862178 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" podUID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 11 10:14:05 crc kubenswrapper[4953]: I1211 10:14:05.915878 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:05 crc kubenswrapper[4953]: E1211 10:14:05.917273 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 10:14:06.417256237 +0000 UTC m=+164.441115270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.314930 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:06 crc kubenswrapper[4953]: E1211 10:14:06.315610 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:06.815596285 +0000 UTC m=+164.839455318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.350244 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" podStartSLOduration=141.350228658 podStartE2EDuration="2m21.350228658s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:05.94255227 +0000 UTC m=+163.966411363" watchObservedRunningTime="2025-12-11 10:14:06.350228658 +0000 UTC m=+164.374087691" Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.594748 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:06 crc kubenswrapper[4953]: E1211 10:14:06.595419 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:07.095398005 +0000 UTC m=+165.119257048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.698565 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:06 crc kubenswrapper[4953]: E1211 10:14:06.699146 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:07.19912964 +0000 UTC m=+165.222988683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.725307 4953 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6jmjq container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.725349 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" podUID="332434db-75e4-4fce-8973-aff84310d0f5" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.725425 4953 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-cl6x8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.725439 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" podUID="3cee2756-70ae-44b9-b52a-43cf1bc552e0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.725476 4953 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-cl6x8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection 
refused" start-of-body= Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.725487 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" podUID="3cee2756-70ae-44b9-b52a-43cf1bc552e0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.727288 4953 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6jmjq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.727376 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" podUID="332434db-75e4-4fce-8973-aff84310d0f5" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.732868 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:06 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:06 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:06 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.733154 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.759275 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.774929 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c" event={"ID":"872f79b9-6f54-4b5c-bc80-cd2404dc3156","Type":"ContainerStarted","Data":"74e01ed2c5e0c94bd58e8934d6dde4921244859695d7daf87ce7dd381a3c6c09"} Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.799250 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:06 crc kubenswrapper[4953]: E1211 10:14:06.799633 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:07.299614169 +0000 UTC m=+165.323473202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.804612 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7b4cr" Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.822210 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qmmnp" event={"ID":"9d5a8173-8a2a-42c1-9935-2433336c3be7","Type":"ContainerStarted","Data":"e77050286b92b517b749c7d5907f757a41992142b5de85da3ea04ab0908f0571"} Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.824500 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rxj74" podStartSLOduration=141.824485848 podStartE2EDuration="2m21.824485848s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:06.822930246 +0000 UTC m=+164.846789279" watchObservedRunningTime="2025-12-11 10:14:06.824485848 +0000 UTC m=+164.848344881" Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.827342 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" event={"ID":"35703302-61e8-4383-9d13-0449584419e4","Type":"ContainerStarted","Data":"8d3f9181c999c15ade4d1159f94512215fc57c712fb7c5ba9be4020fe5b46487"} Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.851199 4953 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-crtp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.851309 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" podUID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.866334 4953 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6jmjq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.866412 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" podUID="332434db-75e4-4fce-8973-aff84310d0f5" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Dec 11 10:14:06 crc kubenswrapper[4953]: I1211 10:14:06.888700 4953 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" podStartSLOduration=142.888677051 podStartE2EDuration="2m22.888677051s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:06.888200555 +0000 UTC m=+164.912059588" watchObservedRunningTime="2025-12-11 10:14:06.888677051 +0000 UTC m=+164.912536084" Dec 11 10:14:07 crc kubenswrapper[4953]: I1211 10:14:07.187920 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:07 crc kubenswrapper[4953]: E1211 10:14:07.193226 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:07.693200287 +0000 UTC m=+165.717059330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:07 crc kubenswrapper[4953]: I1211 10:14:07.258973 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ml8wp" podStartSLOduration=141.258957703 podStartE2EDuration="2m21.258957703s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:07.237048531 +0000 UTC m=+165.260907564" watchObservedRunningTime="2025-12-11 10:14:07.258957703 +0000 UTC m=+165.282816736" Dec 11 10:14:07 crc kubenswrapper[4953]: I1211 10:14:07.289312 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:07 crc kubenswrapper[4953]: E1211 10:14:07.289687 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:07.789671814 +0000 UTC m=+165.813530847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:07 crc kubenswrapper[4953]: I1211 10:14:07.487696 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:07 crc kubenswrapper[4953]: E1211 10:14:07.488250 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:07.988236562 +0000 UTC m=+166.012095595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:07 crc kubenswrapper[4953]: I1211 10:14:07.624252 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:07 crc kubenswrapper[4953]: E1211 10:14:07.625031 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:08.124999035 +0000 UTC m=+166.148858068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:07 crc kubenswrapper[4953]: I1211 10:14:07.668049 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w2rvh"] Dec 11 10:14:07 crc kubenswrapper[4953]: I1211 10:14:07.669345 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.009919 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" podStartSLOduration=142.009891337 podStartE2EDuration="2m22.009891337s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:07.678194086 +0000 UTC m=+165.702053139" watchObservedRunningTime="2025-12-11 10:14:08.009891337 +0000 UTC m=+166.033750370" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.014282 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:08 crc kubenswrapper[4953]: E1211 10:14:08.014757 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:08.514737557 +0000 UTC m=+166.538596590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.015011 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:08 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:08 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:08 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.015045 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.024504 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l5pbm"] Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.025719 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.029116 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w2rvh"] Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.065075 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.195393 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:08 crc kubenswrapper[4953]: E1211 10:14:08.195797 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:08.695780208 +0000 UTC m=+166.719639241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.195942 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-utilities\") pod \"certified-operators-w2rvh\" (UID: \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\") " pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.196083 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xx8\" (UniqueName: \"kubernetes.io/projected/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-kube-api-access-x6xx8\") pod \"certified-operators-w2rvh\" (UID: \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\") " pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.196194 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b099bc8-faec-451b-88a3-f03e46e3ad94-utilities\") pod \"community-operators-l5pbm\" (UID: \"3b099bc8-faec-451b-88a3-f03e46e3ad94\") " pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.196346 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67vlb\" (UniqueName: \"kubernetes.io/projected/3b099bc8-faec-451b-88a3-f03e46e3ad94-kube-api-access-67vlb\") pod \"community-operators-l5pbm\" (UID: \"3b099bc8-faec-451b-88a3-f03e46e3ad94\") " pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.196521 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.196639 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b099bc8-faec-451b-88a3-f03e46e3ad94-catalog-content\") pod \"community-operators-l5pbm\" (UID: \"3b099bc8-faec-451b-88a3-f03e46e3ad94\") " pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.196811 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-catalog-content\") pod \"certified-operators-w2rvh\" (UID: \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\") " pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:14:08 crc kubenswrapper[4953]: E1211 10:14:08.198204 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:08.698183587 +0000 UTC m=+166.722042690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.376869 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.377147 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-utilities\") pod \"certified-operators-w2rvh\" (UID: \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\") " pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.377185 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6xx8\" (UniqueName: \"kubernetes.io/projected/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-kube-api-access-x6xx8\") pod \"certified-operators-w2rvh\" (UID: \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\") " pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.377220 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b099bc8-faec-451b-88a3-f03e46e3ad94-utilities\") pod \"community-operators-l5pbm\" (UID: \"3b099bc8-faec-451b-88a3-f03e46e3ad94\") " pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.377269 4953 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67vlb\" (UniqueName: \"kubernetes.io/projected/3b099bc8-faec-451b-88a3-f03e46e3ad94-kube-api-access-67vlb\") pod \"community-operators-l5pbm\" (UID: \"3b099bc8-faec-451b-88a3-f03e46e3ad94\") " pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.377320 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b099bc8-faec-451b-88a3-f03e46e3ad94-catalog-content\") pod \"community-operators-l5pbm\" (UID: \"3b099bc8-faec-451b-88a3-f03e46e3ad94\") " pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.377366 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-catalog-content\") pod \"certified-operators-w2rvh\" (UID: \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\") " pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.377647 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-m69bw" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.377999 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-catalog-content\") pod \"certified-operators-w2rvh\" (UID: \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\") " pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:14:08 crc kubenswrapper[4953]: E1211 10:14:08.378009 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:08.877995105 +0000 UTC m=+166.901854138 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.378431 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-utilities\") pod \"certified-operators-w2rvh\" (UID: \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\") " pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.378549 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b099bc8-faec-451b-88a3-f03e46e3ad94-utilities\") pod \"community-operators-l5pbm\" (UID: \"3b099bc8-faec-451b-88a3-f03e46e3ad94\") " pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.378941 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b099bc8-faec-451b-88a3-f03e46e3ad94-catalog-content\") pod \"community-operators-l5pbm\" (UID: \"3b099bc8-faec-451b-88a3-f03e46e3ad94\") " pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.471892 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.482937 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:08 crc kubenswrapper[4953]: E1211 10:14:08.483360 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:08.983347596 +0000 UTC m=+167.007206629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.499005 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.509242 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cf6dd"] Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.511861 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.520810 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dbdlx"] Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.522925 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.543562 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l5pbm"] Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.589376 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.590309 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzfq\" (UniqueName: \"kubernetes.io/projected/c6173b60-4d44-435b-a606-0b3836f71ad2-kube-api-access-zvzfq\") pod \"community-operators-dbdlx\" (UID: \"c6173b60-4d44-435b-a606-0b3836f71ad2\") " pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.590510 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldkqc\" (UniqueName: \"kubernetes.io/projected/cef0b6d3-40d2-4981-894b-962df1304c36-kube-api-access-ldkqc\") pod \"certified-operators-cf6dd\" (UID: \"cef0b6d3-40d2-4981-894b-962df1304c36\") " pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.590652 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6173b60-4d44-435b-a606-0b3836f71ad2-utilities\") pod \"community-operators-dbdlx\" (UID: \"c6173b60-4d44-435b-a606-0b3836f71ad2\") " pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.590844 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6173b60-4d44-435b-a606-0b3836f71ad2-catalog-content\") pod \"community-operators-dbdlx\" (UID: \"c6173b60-4d44-435b-a606-0b3836f71ad2\") " pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.590979 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef0b6d3-40d2-4981-894b-962df1304c36-utilities\") pod \"certified-operators-cf6dd\" (UID: \"cef0b6d3-40d2-4981-894b-962df1304c36\") " pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.591112 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef0b6d3-40d2-4981-894b-962df1304c36-catalog-content\") pod \"certified-operators-cf6dd\" (UID: \"cef0b6d3-40d2-4981-894b-962df1304c36\") " pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:14:08 crc kubenswrapper[4953]: E1211 
10:14:08.591619 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:09.09156172 +0000 UTC m=+167.115420763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.626714 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6xx8\" (UniqueName: \"kubernetes.io/projected/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-kube-api-access-x6xx8\") pod \"certified-operators-w2rvh\" (UID: \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\") " pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.631453 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.648413 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67vlb\" (UniqueName: \"kubernetes.io/projected/3b099bc8-faec-451b-88a3-f03e46e3ad94-kube-api-access-67vlb\") pod \"community-operators-l5pbm\" (UID: \"3b099bc8-faec-451b-88a3-f03e46e3ad94\") " pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.672870 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.718617 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzfq\" (UniqueName: \"kubernetes.io/projected/c6173b60-4d44-435b-a606-0b3836f71ad2-kube-api-access-zvzfq\") pod \"community-operators-dbdlx\" (UID: \"c6173b60-4d44-435b-a606-0b3836f71ad2\") " pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.718666 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldkqc\" (UniqueName: \"kubernetes.io/projected/cef0b6d3-40d2-4981-894b-962df1304c36-kube-api-access-ldkqc\") pod \"certified-operators-cf6dd\" (UID: \"cef0b6d3-40d2-4981-894b-962df1304c36\") " pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.718709 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.718741 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6173b60-4d44-435b-a606-0b3836f71ad2-utilities\") pod \"community-operators-dbdlx\" (UID: \"c6173b60-4d44-435b-a606-0b3836f71ad2\") " pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.718769 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6173b60-4d44-435b-a606-0b3836f71ad2-catalog-content\") pod \"community-operators-dbdlx\" (UID: \"c6173b60-4d44-435b-a606-0b3836f71ad2\") " pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.718789 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef0b6d3-40d2-4981-894b-962df1304c36-utilities\") pod \"certified-operators-cf6dd\" (UID: \"cef0b6d3-40d2-4981-894b-962df1304c36\") " pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.718821 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef0b6d3-40d2-4981-894b-962df1304c36-catalog-content\") pod \"certified-operators-cf6dd\" (UID: \"cef0b6d3-40d2-4981-894b-962df1304c36\") " pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:14:08 crc kubenswrapper[4953]: E1211 10:14:08.719656 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:09.219636946 +0000 UTC m=+167.243495989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.720105 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cf6dd"] Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.720326 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6173b60-4d44-435b-a606-0b3836f71ad2-catalog-content\") pod \"community-operators-dbdlx\" (UID: \"c6173b60-4d44-435b-a606-0b3836f71ad2\") " pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.720643 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6173b60-4d44-435b-a606-0b3836f71ad2-utilities\") pod \"community-operators-dbdlx\" (UID: \"c6173b60-4d44-435b-a606-0b3836f71ad2\") " pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.759526 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef0b6d3-40d2-4981-894b-962df1304c36-utilities\") pod \"certified-operators-cf6dd\" (UID: \"cef0b6d3-40d2-4981-894b-962df1304c36\") " pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.759962 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:08 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:08 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:08 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.760002 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.766728 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dbdlx"] Dec 11 10:14:08 crc kubenswrapper[4953]: I1211 10:14:08.759352 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef0b6d3-40d2-4981-894b-962df1304c36-catalog-content\") pod \"certified-operators-cf6dd\" (UID: \"cef0b6d3-40d2-4981-894b-962df1304c36\") " pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.222697 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.224477 4953 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-crtp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.224542 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" podUID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.225958 4953 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-crtp9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.226033 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" podUID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 11 10:14:09 crc kubenswrapper[4953]: E1211 10:14:09.229469 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:10.229437068 +0000 UTC m=+168.253296101 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.262058 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" podStartSLOduration=143.262034944 podStartE2EDuration="2m23.262034944s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:09.245932083 +0000 UTC m=+167.269791116" watchObservedRunningTime="2025-12-11 10:14:09.262034944 +0000 UTC m=+167.285893977" Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.339747 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzfq\" (UniqueName: \"kubernetes.io/projected/c6173b60-4d44-435b-a606-0b3836f71ad2-kube-api-access-zvzfq\") pod \"community-operators-dbdlx\" (UID: \"c6173b60-4d44-435b-a606-0b3836f71ad2\") " pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:14:09 crc kubenswrapper[4953]: E1211 10:14:09.336041 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:09.836025811 +0000 UTC m=+167.859884844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.335972 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.340984 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:09 crc kubenswrapper[4953]: E1211 10:14:09.341595 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:09.841569934 +0000 UTC m=+167.865428967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.392565 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldkqc\" (UniqueName: \"kubernetes.io/projected/cef0b6d3-40d2-4981-894b-962df1304c36-kube-api-access-ldkqc\") pod \"certified-operators-cf6dd\" (UID: \"cef0b6d3-40d2-4981-894b-962df1304c36\") " pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.556674 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:09 crc kubenswrapper[4953]: E1211 10:14:09.641217 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:10.141175899 +0000 UTC m=+168.165034932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.648951 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.832942 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:09 crc kubenswrapper[4953]: E1211 10:14:09.833284 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:10.333264853 +0000 UTC m=+168.357123886 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:09 crc kubenswrapper[4953]: I1211 10:14:09.858805 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:10.392976 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:10.393511 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:14 crc kubenswrapper[4953]: E1211 10:14:10.393854 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:11.39383509 +0000 UTC m=+169.417694123 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:10.396605 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f65b63-32e0-49cc-bc96-272ecfb987ed-metrics-certs\") pod \"network-metrics-daemon-qm4mr\" (UID: \"86f65b63-32e0-49cc-bc96-272ecfb987ed\") " pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:10.397862 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:14 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:14 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:14 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:10.397906 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:10.399424 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qm4mr" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:10.405922 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:10.519418 4953 patch_prober.go:28] interesting pod/apiserver-76f77b778f-j88r5 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:10.519461 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" podUID="35703302-61e8-4383-9d13-0449584419e4" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:10.523305 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:14 crc kubenswrapper[4953]: E1211 10:14:10.524121 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:11.024105909 +0000 UTC m=+169.047964942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:10.961429 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:10.991962 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bc5n5" podStartSLOduration=145.991942823 podStartE2EDuration="2m25.991942823s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:10.519249359 +0000 UTC m=+168.543108402" watchObservedRunningTime="2025-12-11 10:14:10.991942823 +0000 UTC m=+169.015801856" Dec 11 10:14:14 crc kubenswrapper[4953]: E1211 10:14:11.003646 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 10:14:11.503563236 +0000 UTC m=+169.527422269 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:11.003760 4953 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-9shds container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:11.003799 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds" podUID="c97cb435-9028-4ea4-a6cb-7851c2845566" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:11.048477 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:14 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:14 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:14 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:11.048509 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:11.063177 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:14 crc kubenswrapper[4953]: E1211 10:14:11.063874 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:11.563845451 +0000 UTC m=+169.587704494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:11.236135 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:14 crc kubenswrapper[4953]: E1211 10:14:11.263893 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:11.763862906 +0000 UTC m=+169.787721939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:11.423234 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:14 crc kubenswrapper[4953]: E1211 10:14:11.436721 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:11.936647715 +0000 UTC m=+169.960506768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:11.879684 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:14 crc kubenswrapper[4953]: E1211 10:14:11.880186 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:12.380170089 +0000 UTC m=+170.404029122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:11.999308 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:14 crc kubenswrapper[4953]: E1211 10:14:12.008040 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:12.508006937 +0000 UTC m=+170.531866020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:12.019547 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:14 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:14 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:14 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:12.019615 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:12.524432 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:14 crc kubenswrapper[4953]: E1211 10:14:12.830880 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:13.83085454 +0000 UTC m=+171.854713573 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:12.877712 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:12.881921 4953 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-crtp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:12.881969 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" podUID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:12.899146 4953 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-crtp9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:12.899216 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" podUID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:14:14 crc kubenswrapper[4953]: E1211 10:14:12.902200 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 10:14:13.378273761 +0000 UTC m=+171.402132794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-r99w9" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:12.928090 4953 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.182824 4953 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-11T10:14:12.928130894Z","Handler":null,"Name":""} Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.188993 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:14 crc kubenswrapper[4953]: E1211 10:14:13.189853 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 10:14:13.68983536 +0000 UTC m=+171.713694393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.242708 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-m69bw" podStartSLOduration=31.24268615 podStartE2EDuration="31.24268615s" podCreationTimestamp="2025-12-11 10:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:13.238555314 +0000 UTC m=+171.262414347" watchObservedRunningTime="2025-12-11 10:14:13.24268615 +0000 UTC m=+171.266545183" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.278998 4953 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.279446 4953 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.781717 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.807778 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" podStartSLOduration=149.807762395 podStartE2EDuration="2m29.807762395s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:13.777051844 +0000 UTC m=+171.800910897" watchObservedRunningTime="2025-12-11 10:14:13.807762395 +0000 UTC m=+171.831621428" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.859849 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:14 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:14 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:14 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.860206 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.860653 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
(OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.866694 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:14 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:14 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:14 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.866719 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.887536 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.914087 4953 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:13.914150 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.292202 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-r99w9\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.389554 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 11 10:14:14 crc kubenswrapper[4953]: E1211 10:14:14.390380 4953 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.878s" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.390411 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"80d43f58-fcb7-4227-b9ef-9e302b7ee878","Type":"ContainerStarted","Data":"891603bc4c59ef5cd58ff77d0fbf1509692e742e2c324db988fe95f45cd0dcd3"} Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.403413 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.459881 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.586147 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.586507 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gnxp9"] Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.588734 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"f183abe1691c218366fbb4971c0ead0a606b827bc0d02d663dbb0e38e1b661fa"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.588901 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" podUID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerName="openshift-config-operator" containerID="cri-o://f183abe1691c218366fbb4971c0ead0a606b827bc0d02d663dbb0e38e1b661fa" gracePeriod=30 Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.606862 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-qmmnp" podStartSLOduration=148.606838896 podStartE2EDuration="2m28.606838896s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:14.60666864 +0000 UTC m=+172.630527663" watchObservedRunningTime="2025-12-11 10:14:14.606838896 +0000 UTC m=+172.630697929" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.615817 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.621957 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.632358 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.632385 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2f46z"] Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.637693 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" event={"ID":"a7b5a1d1-788d-448e-b859-c29daecb9a9b","Type":"ContainerStarted","Data":"0fb7baac859d97bf7c4641681bc0afceaf7c4d2cc777fb657b1a4d994fe4a4ab"} Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.637759 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnxp9"] Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.637775 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2f46z"] Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.637786 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pxglb"] Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.638493 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.640551 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kkp25"] Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.648907 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.654899 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pxglb"] Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.654949 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkp25"] Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.654974 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.655385 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.655506 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.670657 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.684971 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.685158 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.685289 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.709528 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmd6v\" (UniqueName: \"kubernetes.io/projected/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-kube-api-access-zmd6v\") pod \"redhat-marketplace-gnxp9\" (UID: \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\") " pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.709602 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/498719f2-a1f1-4214-b357-37160f0eabb2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"498719f2-a1f1-4214-b357-37160f0eabb2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.709846 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-catalog-content\") pod \"redhat-marketplace-gnxp9\" (UID: \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\") " pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.709868 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46f197d9-de5c-42c2-9781-47ed42389e11-catalog-content\") pod \"redhat-operators-pxglb\" (UID: \"46f197d9-de5c-42c2-9781-47ed42389e11\") " pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.709896 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n47f\" (UniqueName: \"kubernetes.io/projected/2f406ece-016a-43bc-92c9-473b85ad0ca9-kube-api-access-6n47f\") pod \"redhat-operators-2f46z\" (UID: \"2f406ece-016a-43bc-92c9-473b85ad0ca9\") " pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.709920 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl7ms\" (UniqueName: \"kubernetes.io/projected/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-kube-api-access-sl7ms\") pod \"redhat-marketplace-kkp25\" (UID: \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\") " pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.709951 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f406ece-016a-43bc-92c9-473b85ad0ca9-catalog-content\") pod \"redhat-operators-2f46z\" (UID: \"2f406ece-016a-43bc-92c9-473b85ad0ca9\") " pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 
10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.709972 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f406ece-016a-43bc-92c9-473b85ad0ca9-utilities\") pod \"redhat-operators-2f46z\" (UID: \"2f406ece-016a-43bc-92c9-473b85ad0ca9\") " pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.710008 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-catalog-content\") pod \"redhat-marketplace-kkp25\" (UID: \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\") " pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.710058 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46f197d9-de5c-42c2-9781-47ed42389e11-utilities\") pod \"redhat-operators-pxglb\" (UID: \"46f197d9-de5c-42c2-9781-47ed42389e11\") " pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.710073 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h45m\" (UniqueName: \"kubernetes.io/projected/46f197d9-de5c-42c2-9781-47ed42389e11-kube-api-access-8h45m\") pod \"redhat-operators-pxglb\" (UID: \"46f197d9-de5c-42c2-9781-47ed42389e11\") " pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.710099 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/498719f2-a1f1-4214-b357-37160f0eabb2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"498719f2-a1f1-4214-b357-37160f0eabb2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.710126 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-utilities\") pod \"redhat-marketplace-kkp25\" (UID: \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\") " pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.710147 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-utilities\") pod \"redhat-marketplace-gnxp9\" (UID: \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\") " pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.763444 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:14 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:14 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:14 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.763498 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" 
podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.825712 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/498719f2-a1f1-4214-b357-37160f0eabb2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"498719f2-a1f1-4214-b357-37160f0eabb2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.825773 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-utilities\") pod \"redhat-marketplace-kkp25\" (UID: \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\") " pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.825818 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-utilities\") pod \"redhat-marketplace-gnxp9\" (UID: \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\") " pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.825873 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmd6v\" (UniqueName: \"kubernetes.io/projected/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-kube-api-access-zmd6v\") pod \"redhat-marketplace-gnxp9\" (UID: \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\") " pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.825912 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/498719f2-a1f1-4214-b357-37160f0eabb2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"498719f2-a1f1-4214-b357-37160f0eabb2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.825945 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-catalog-content\") pod \"redhat-marketplace-gnxp9\" (UID: \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\") " pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.825971 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46f197d9-de5c-42c2-9781-47ed42389e11-catalog-content\") pod \"redhat-operators-pxglb\" (UID: \"46f197d9-de5c-42c2-9781-47ed42389e11\") " pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.826011 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n47f\" (UniqueName: \"kubernetes.io/projected/2f406ece-016a-43bc-92c9-473b85ad0ca9-kube-api-access-6n47f\") pod \"redhat-operators-2f46z\" (UID: \"2f406ece-016a-43bc-92c9-473b85ad0ca9\") " pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.826045 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl7ms\" (UniqueName: \"kubernetes.io/projected/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-kube-api-access-sl7ms\") pod 
\"redhat-marketplace-kkp25\" (UID: \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\") " pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.826078 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f406ece-016a-43bc-92c9-473b85ad0ca9-catalog-content\") pod \"redhat-operators-2f46z\" (UID: \"2f406ece-016a-43bc-92c9-473b85ad0ca9\") " pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.826110 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f406ece-016a-43bc-92c9-473b85ad0ca9-utilities\") pod \"redhat-operators-2f46z\" (UID: \"2f406ece-016a-43bc-92c9-473b85ad0ca9\") " pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.826154 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-catalog-content\") pod \"redhat-marketplace-kkp25\" (UID: \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\") " pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.826185 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46f197d9-de5c-42c2-9781-47ed42389e11-utilities\") pod \"redhat-operators-pxglb\" (UID: \"46f197d9-de5c-42c2-9781-47ed42389e11\") " pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.826213 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h45m\" (UniqueName: \"kubernetes.io/projected/46f197d9-de5c-42c2-9781-47ed42389e11-kube-api-access-8h45m\") pod \"redhat-operators-pxglb\" (UID: \"46f197d9-de5c-42c2-9781-47ed42389e11\") " pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.828518 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-utilities\") pod \"redhat-marketplace-kkp25\" (UID: \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\") " pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.829051 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-utilities\") pod \"redhat-marketplace-gnxp9\" (UID: \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\") " pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.829354 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/498719f2-a1f1-4214-b357-37160f0eabb2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"498719f2-a1f1-4214-b357-37160f0eabb2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.829915 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-catalog-content\") pod \"redhat-marketplace-gnxp9\" (UID: \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\") " 
pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.830330 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46f197d9-de5c-42c2-9781-47ed42389e11-catalog-content\") pod \"redhat-operators-pxglb\" (UID: \"46f197d9-de5c-42c2-9781-47ed42389e11\") " pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.831082 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f406ece-016a-43bc-92c9-473b85ad0ca9-catalog-content\") pod \"redhat-operators-2f46z\" (UID: \"2f406ece-016a-43bc-92c9-473b85ad0ca9\") " pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.831322 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f406ece-016a-43bc-92c9-473b85ad0ca9-utilities\") pod \"redhat-operators-2f46z\" (UID: \"2f406ece-016a-43bc-92c9-473b85ad0ca9\") " pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.831651 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-catalog-content\") pod \"redhat-marketplace-kkp25\" (UID: \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\") " pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.831892 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46f197d9-de5c-42c2-9781-47ed42389e11-utilities\") pod \"redhat-operators-pxglb\" (UID: \"46f197d9-de5c-42c2-9781-47ed42389e11\") " pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.907092 4953 patch_prober.go:28] interesting pod/console-f9d7485db-wfrqd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.910470 4953 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-crtp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.910524 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" podUID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.920769 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wfrqd" podUID="6a593442-828c-4cff-b9b9-4efa41ef6f44" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.921552 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6n47f\" (UniqueName: \"kubernetes.io/projected/2f406ece-016a-43bc-92c9-473b85ad0ca9-kube-api-access-6n47f\") pod \"redhat-operators-2f46z\" (UID: \"2f406ece-016a-43bc-92c9-473b85ad0ca9\") " pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.923425 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dr5c" podStartSLOduration=150.923408169 podStartE2EDuration="2m30.923408169s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:14.80863209 +0000 UTC m=+172.832491143" watchObservedRunningTime="2025-12-11 10:14:14.923408169 +0000 UTC m=+172.947267202" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.937995 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h45m\" (UniqueName: \"kubernetes.io/projected/46f197d9-de5c-42c2-9781-47ed42389e11-kube-api-access-8h45m\") pod \"redhat-operators-pxglb\" (UID: \"46f197d9-de5c-42c2-9781-47ed42389e11\") " pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:14:14 crc kubenswrapper[4953]: I1211 10:14:14.942706 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/498719f2-a1f1-4214-b357-37160f0eabb2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"498719f2-a1f1-4214-b357-37160f0eabb2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.130620 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmd6v\" (UniqueName: \"kubernetes.io/projected/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-kube-api-access-zmd6v\") pod \"redhat-marketplace-gnxp9\" (UID: \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\") " pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.202693 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.230384 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl7ms\" (UniqueName: \"kubernetes.io/projected/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-kube-api-access-sl7ms\") pod \"redhat-marketplace-kkp25\" (UID: \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\") " pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.242082 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.242135 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.242172 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-9jt44" Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.242730 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.242798 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.242750 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"2fd19b1d3525293fe8a1689b91e17acf46c7fad4d58d6e03ed0463a14eac4aa9"} pod="openshift-console/downloads-7954f5f757-9jt44" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.242965 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" containerID="cri-o://2fd19b1d3525293fe8a1689b91e17acf46c7fad4d58d6e03ed0463a14eac4aa9" gracePeriod=2 Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.243156 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.243174 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:15 crc 
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.326842 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qm4mr"]
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.341814 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l5pbm"]
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.382967 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cf6dd"]
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.392399 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w2rvh"]
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.467733 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds"
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.508334 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9shds"
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.517365 4953 generic.go:334] "Generic (PLEG): container finished" podID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerID="f183abe1691c218366fbb4971c0ead0a606b827bc0d02d663dbb0e38e1b661fa" exitCode=0
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.517425 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" event={"ID":"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9","Type":"ContainerDied","Data":"f183abe1691c218366fbb4971c0ead0a606b827bc0d02d663dbb0e38e1b661fa"}
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.583207 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2f46z"
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.599912 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pxglb"
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.642385 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkp25"
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.704840 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 10:14:15 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Dec 11 10:14:15 crc kubenswrapper[4953]: [+]process-running ok
Dec 11 10:14:15 crc kubenswrapper[4953]: healthz check failed
Dec 11 10:14:15 crc kubenswrapper[4953]: I1211 10:14:15.705290 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
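The router probe body repeated through this section ([-]backend-http, [-]has-synced, [+]process-running, "healthz check failed") is the conventional aggregated-healthz format: one line per named check, reasons withheld from unauthenticated callers, and HTTP 500 until every check passes. A self-contained sketch producing the same shape (check names taken from the log; the handler and port are assumptions, not the router's code):

// healthz_sketch.go -- a toy aggregated health endpoint; illustrative only.
package main

import (
	"fmt"
	"log"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				// Real aggregators can withhold failure details from
				// unauthenticated clients, hence "reason withheld" above.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // probe logs statuscode: 500
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	http.Handle("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("no backends yet") }},
		{"has-synced", func() error { return fmt.Errorf("config not synced") }},
		{"process-running", func() error { return nil }},
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}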
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.045353 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-r99w9"] Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.081493 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dbdlx"] Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.406979 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnxp9"] Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.571922 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2rvh" event={"ID":"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6","Type":"ContainerStarted","Data":"d578d3c4b9e1bad41906bc892faac751523d1c31943a33eed46560b9d04193e1"} Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.603012 4953 generic.go:334] "Generic (PLEG): container finished" podID="3b099bc8-faec-451b-88a3-f03e46e3ad94" containerID="1130d1c0eb55095648ba931767502e4b391aab712a60ba66ca340b733199bcad" exitCode=0 Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.603361 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5pbm" event={"ID":"3b099bc8-faec-451b-88a3-f03e46e3ad94","Type":"ContainerDied","Data":"1130d1c0eb55095648ba931767502e4b391aab712a60ba66ca340b733199bcad"} Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.603434 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5pbm" event={"ID":"3b099bc8-faec-451b-88a3-f03e46e3ad94","Type":"ContainerStarted","Data":"8d51dc80d54a321ce08fbd2cafbceeba078a629c62f486aeb4aa1cef25da139d"} Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.608122 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.665179 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" event={"ID":"a7b5a1d1-788d-448e-b859-c29daecb9a9b","Type":"ContainerStarted","Data":"3efba71d2415008aed2fdc5f8873263496146d8e37b89aae2301939877375324"} Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.687862 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" event={"ID":"86f65b63-32e0-49cc-bc96-272ecfb987ed","Type":"ContainerStarted","Data":"107afc85dd140e199f75dac0710723a2bd96391a542c0d95fb5491fa10d6c31e"} Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.704629 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cl6x8" Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.705079 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"80d43f58-fcb7-4227-b9ef-9e302b7ee878","Type":"ContainerStarted","Data":"d4618308350ede9e8f514f8c6eda6ac75595869f8015a0c1c7d9e7f24163e6ae"} Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.723639 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:16 crc kubenswrapper[4953]: [-]has-synced 
failed: reason withheld Dec 11 10:14:16 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:16 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.723731 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.731157 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxp9" event={"ID":"fe9b2116-8ab4-4c4c-8c58-74e62f28893d","Type":"ContainerStarted","Data":"757d1eecf2c0afe79f6123bb78ee463e89fedb46d05f64216e1fed3e36c4a053"} Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.747378 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbdlx" event={"ID":"c6173b60-4d44-435b-a606-0b3836f71ad2","Type":"ContainerStarted","Data":"e68ea1847bc2e2a31cc0f4c60fdb0bc1f2cbef1d73d219d60610728fa7b5c25a"} Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.759791 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" event={"ID":"c1a4e773-6467-424c-935e-40ef82e5fa99","Type":"ContainerStarted","Data":"225b422f5717617bb95a223a7e97fe0fb54d44b880eb5910a89d074815223079"} Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.761679 4953 generic.go:334] "Generic (PLEG): container finished" podID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerID="2fd19b1d3525293fe8a1689b91e17acf46c7fad4d58d6e03ed0463a14eac4aa9" exitCode=0 Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.761741 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9jt44" event={"ID":"63ca4931-8019-4e0d-ab43-ae5bd50b8d91","Type":"ContainerDied","Data":"2fd19b1d3525293fe8a1689b91e17acf46c7fad4d58d6e03ed0463a14eac4aa9"} Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.771852 4953 generic.go:334] "Generic (PLEG): container finished" podID="88498e28-0a15-43a5-b157-5a3baccfaaaf" containerID="1d08248671906f09dfebb27a3caa1268bf31d38878e09af2bf48efe79e0f1eef" exitCode=0 Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.779320 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=15.779295005 podStartE2EDuration="15.779295005s" podCreationTimestamp="2025-12-11 10:14:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:16.768475689 +0000 UTC m=+174.792334732" watchObservedRunningTime="2025-12-11 10:14:16.779295005 +0000 UTC m=+174.803154038" Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.784323 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" event={"ID":"88498e28-0a15-43a5-b157-5a3baccfaaaf","Type":"ContainerDied","Data":"1d08248671906f09dfebb27a3caa1268bf31d38878e09af2bf48efe79e0f1eef"} Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.784446 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf6dd" event={"ID":"cef0b6d3-40d2-4981-894b-962df1304c36","Type":"ContainerStarted","Data":"e2c8787925d5def8e22f73d88f92ec15384e786a2aa4fff8f468d97d79d51ac1"} Dec 11 10:14:16 crc 
kubenswrapper[4953]: I1211 10:14:16.829992 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkp25"] Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.883102 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jmjq" Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.884932 4953 patch_prober.go:28] interesting pod/apiserver-76f77b778f-j88r5 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 10:14:16 crc kubenswrapper[4953]: [+]log ok Dec 11 10:14:16 crc kubenswrapper[4953]: [+]etcd ok Dec 11 10:14:16 crc kubenswrapper[4953]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 10:14:16 crc kubenswrapper[4953]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 10:14:16 crc kubenswrapper[4953]: [+]poststarthook/max-in-flight-filter ok Dec 11 10:14:16 crc kubenswrapper[4953]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 10:14:16 crc kubenswrapper[4953]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 10:14:16 crc kubenswrapper[4953]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 11 10:14:16 crc kubenswrapper[4953]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 11 10:14:16 crc kubenswrapper[4953]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 10:14:16 crc kubenswrapper[4953]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 10:14:16 crc kubenswrapper[4953]: [+]poststarthook/openshift.io-startinformers ok Dec 11 10:14:16 crc kubenswrapper[4953]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 10:14:16 crc kubenswrapper[4953]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 10:14:16 crc kubenswrapper[4953]: livez check failed Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.885012 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" podUID="35703302-61e8-4383-9d13-0449584419e4" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.894157 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pxglb"] Dec 11 10:14:16 crc kubenswrapper[4953]: I1211 10:14:16.901069 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2f46z"] Dec 11 10:14:17 crc kubenswrapper[4953]: I1211 10:14:17.066142 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 10:14:17 crc kubenswrapper[4953]: I1211 10:14:17.698917 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:17 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:17 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:17 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:17 crc kubenswrapper[4953]: I1211 10:14:17.699306 4953 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:17 crc kubenswrapper[4953]: I1211 10:14:17.796850 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f46z" event={"ID":"2f406ece-016a-43bc-92c9-473b85ad0ca9","Type":"ContainerStarted","Data":"23ac89f6dbb4662b081e959175284cf76f8339415386db03d6bfecb78d45b86c"} Dec 11 10:14:17 crc kubenswrapper[4953]: I1211 10:14:17.798704 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"498719f2-a1f1-4214-b357-37160f0eabb2","Type":"ContainerStarted","Data":"6780abae2a6dfab12b05bd648eede87260743a1dc69d96eab6b58bf95b813470"} Dec 11 10:14:17 crc kubenswrapper[4953]: I1211 10:14:17.800431 4953 generic.go:334] "Generic (PLEG): container finished" podID="80d43f58-fcb7-4227-b9ef-9e302b7ee878" containerID="d4618308350ede9e8f514f8c6eda6ac75595869f8015a0c1c7d9e7f24163e6ae" exitCode=0 Dec 11 10:14:17 crc kubenswrapper[4953]: I1211 10:14:17.800494 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"80d43f58-fcb7-4227-b9ef-9e302b7ee878","Type":"ContainerDied","Data":"d4618308350ede9e8f514f8c6eda6ac75595869f8015a0c1c7d9e7f24163e6ae"} Dec 11 10:14:17 crc kubenswrapper[4953]: I1211 10:14:17.802346 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkp25" event={"ID":"4468c58a-3cfc-4197-bf1b-8afc67dfda5e","Type":"ContainerStarted","Data":"968362bf145af4e8c6daf3916ed7c71f6368d6575126703378a2fd58e15ddee4"} Dec 11 10:14:17 crc kubenswrapper[4953]: I1211 10:14:17.805121 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxglb" event={"ID":"46f197d9-de5c-42c2-9781-47ed42389e11","Type":"ContainerStarted","Data":"0932ecd86015570542fe1c1f5aadeed270563025cdf37df7353931fac3a61db9"} Dec 11 10:14:17 crc kubenswrapper[4953]: I1211 10:14:17.868855 4953 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-crtp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 11 10:14:17 crc kubenswrapper[4953]: I1211 10:14:17.868899 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" podUID="b9ce2b59-c756-43bf-8114-9fe86a8c8cd9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 11 10:14:18 crc kubenswrapper[4953]: I1211 10:14:18.194236 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:14:18 crc kubenswrapper[4953]: I1211 10:14:18.194300 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 11 10:14:18 crc kubenswrapper[4953]: I1211 10:14:18.792256 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:18 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:18 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:18 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:18 crc kubenswrapper[4953]: I1211 10:14:18.792311 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.134159 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" event={"ID":"b9ce2b59-c756-43bf-8114-9fe86a8c8cd9","Type":"ContainerStarted","Data":"e42d47dfa60e19bf9dd1f21c54305395f37a42478efbaec5e62fc3c076031ad2"} Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.135190 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.137345 4953 generic.go:334] "Generic (PLEG): container finished" podID="cef0b6d3-40d2-4981-894b-962df1304c36" containerID="de7282db68c7e4cd525175aaf5bfef924902be60be49e4f2e490bf6f4e88f9c3" exitCode=0 Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.137605 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf6dd" event={"ID":"cef0b6d3-40d2-4981-894b-962df1304c36","Type":"ContainerDied","Data":"de7282db68c7e4cd525175aaf5bfef924902be60be49e4f2e490bf6f4e88f9c3"} Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.153170 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" event={"ID":"86f65b63-32e0-49cc-bc96-272ecfb987ed","Type":"ContainerStarted","Data":"b6d69f7550c5f5ac607155e3c80897e957678f2b9279044b51b6101f6a5b9252"} Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.162002 4953 generic.go:334] "Generic (PLEG): container finished" podID="c6173b60-4d44-435b-a606-0b3836f71ad2" containerID="d1ac1fd5867034994d70b5ae73677052750ff6f8984aedc59ed92adf8343ae99" exitCode=0 Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.162100 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbdlx" event={"ID":"c6173b60-4d44-435b-a606-0b3836f71ad2","Type":"ContainerDied","Data":"d1ac1fd5867034994d70b5ae73677052750ff6f8984aedc59ed92adf8343ae99"} Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.172533 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9jt44" event={"ID":"63ca4931-8019-4e0d-ab43-ae5bd50b8d91","Type":"ContainerStarted","Data":"2717f1e9fdf436293416ce41be46b5fe65b4af119b04143f9063146a03ab5772"} Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.174641 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9jt44" Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.177123 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.177200 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.200328 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" event={"ID":"a7b5a1d1-788d-448e-b859-c29daecb9a9b","Type":"ContainerStarted","Data":"4007aa883e4a80067244bc2feafb53dcf2e493876434f6a904e2b4dd9098d4e3"}
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.214913 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"498719f2-a1f1-4214-b357-37160f0eabb2","Type":"ContainerStarted","Data":"394744bd0ee95d0cea87d324af61db8e9178192d9947a2d3da0f9747a612f363"}
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.215697 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl"
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.217068 4953 generic.go:334] "Generic (PLEG): container finished" podID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" containerID="f659b208df32224758aaf8c62286bec90a053c1ef5705d12d4cc2b605c64f1d0" exitCode=0
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.217827 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxp9" event={"ID":"fe9b2116-8ab4-4c4c-8c58-74e62f28893d","Type":"ContainerDied","Data":"f659b208df32224758aaf8c62286bec90a053c1ef5705d12d4cc2b605c64f1d0"}
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.224660 4953 generic.go:334] "Generic (PLEG): container finished" podID="46f197d9-de5c-42c2-9781-47ed42389e11" containerID="db03f21c27567a57fd340d4410c247316b54cd4b0a32d0b44758e0041a8b16f9" exitCode=0
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.224949 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxglb" event={"ID":"46f197d9-de5c-42c2-9781-47ed42389e11","Type":"ContainerDied","Data":"db03f21c27567a57fd340d4410c247316b54cd4b0a32d0b44758e0041a8b16f9"}
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.234900 4953 generic.go:334] "Generic (PLEG): container finished" podID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" containerID="1771c954424aecc637c957b97da9788db4fdf5b8c7ce9bd839dfb771c6515e1f" exitCode=0
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.236114 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2rvh" event={"ID":"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6","Type":"ContainerDied","Data":"1771c954424aecc637c957b97da9788db4fdf5b8c7ce9bd839dfb771c6515e1f"}
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.242349 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" event={"ID":"c1a4e773-6467-424c-935e-40ef82e5fa99","Type":"ContainerStarted","Data":"7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f"}
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.242402 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9"
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.244658 4953 generic.go:334] "Generic (PLEG): container finished" podID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" containerID="8366f85b8b45f4b94f8bd1c365c7ccbf506331536c88f6ec8c38d7ebcc9650f9" exitCode=0
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.244729 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkp25" event={"ID":"4468c58a-3cfc-4197-bf1b-8afc67dfda5e","Type":"ContainerDied","Data":"8366f85b8b45f4b94f8bd1c365c7ccbf506331536c88f6ec8c38d7ebcc9650f9"}
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.246757 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl"
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.246746 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl" event={"ID":"88498e28-0a15-43a5-b157-5a3baccfaaaf","Type":"ContainerDied","Data":"0663c1f7bcd738da2d586b7682bdf7a4dd951c70c5ea8c8362f97f69e222c90b"}
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.246922 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0663c1f7bcd738da2d586b7682bdf7a4dd951c70c5ea8c8362f97f69e222c90b"
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.250978 4953 generic.go:334] "Generic (PLEG): container finished" podID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerID="1c00015e89efc7e4605224d26c62dfa4087a8777726e92860ab2a27d46ffbbb1" exitCode=0
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.251460 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f46z" event={"ID":"2f406ece-016a-43bc-92c9-473b85ad0ca9","Type":"ContainerDied","Data":"1c00015e89efc7e4605224d26c62dfa4087a8777726e92860ab2a27d46ffbbb1"}
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.320900 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9sdjn" podStartSLOduration=37.32088073 podStartE2EDuration="37.32088073s" podCreationTimestamp="2025-12-11 10:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:19.315753791 +0000 UTC m=+177.339612844" watchObservedRunningTime="2025-12-11 10:14:19.32088073 +0000 UTC m=+177.344739763"
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.377566 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=7.377537215 podStartE2EDuration="7.377537215s" podCreationTimestamp="2025-12-11 10:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:19.376226692 +0000 UTC m=+177.400085735" watchObservedRunningTime="2025-12-11 10:14:19.377537215 +0000 UTC m=+177.401396258"
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.396183 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpksk\" (UniqueName: \"kubernetes.io/projected/88498e28-0a15-43a5-b157-5a3baccfaaaf-kube-api-access-jpksk\") pod \"88498e28-0a15-43a5-b157-5a3baccfaaaf\" (UID: \"88498e28-0a15-43a5-b157-5a3baccfaaaf\") "
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.396333 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88498e28-0a15-43a5-b157-5a3baccfaaaf-secret-volume\") pod \"88498e28-0a15-43a5-b157-5a3baccfaaaf\" (UID: \"88498e28-0a15-43a5-b157-5a3baccfaaaf\") "
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.396404 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88498e28-0a15-43a5-b157-5a3baccfaaaf-config-volume\") pod \"88498e28-0a15-43a5-b157-5a3baccfaaaf\" (UID: \"88498e28-0a15-43a5-b157-5a3baccfaaaf\") "
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.399354 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88498e28-0a15-43a5-b157-5a3baccfaaaf-config-volume" (OuterVolumeSpecName: "config-volume") pod "88498e28-0a15-43a5-b157-5a3baccfaaaf" (UID: "88498e28-0a15-43a5-b157-5a3baccfaaaf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.491891 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88498e28-0a15-43a5-b157-5a3baccfaaaf-kube-api-access-jpksk" (OuterVolumeSpecName: "kube-api-access-jpksk") pod "88498e28-0a15-43a5-b157-5a3baccfaaaf" (UID: "88498e28-0a15-43a5-b157-5a3baccfaaaf"). InnerVolumeSpecName "kube-api-access-jpksk". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.498449 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpksk\" (UniqueName: \"kubernetes.io/projected/88498e28-0a15-43a5-b157-5a3baccfaaaf-kube-api-access-jpksk\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.498494 4953 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88498e28-0a15-43a5-b157-5a3baccfaaaf-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.498505 4953 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88498e28-0a15-43a5-b157-5a3baccfaaaf-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.555589 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" podStartSLOduration=154.555557697 podStartE2EDuration="2m34.555557697s" podCreationTimestamp="2025-12-11 10:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:19.528226847 +0000 UTC m=+177.552085880" watchObservedRunningTime="2025-12-11 10:14:19.555557697 +0000 UTC m=+177.579416730" Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.694695 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:19 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:19 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:19 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.694756 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.736276 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.903762 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80d43f58-fcb7-4227-b9ef-9e302b7ee878-kube-api-access\") pod \"80d43f58-fcb7-4227-b9ef-9e302b7ee878\" (UID: \"80d43f58-fcb7-4227-b9ef-9e302b7ee878\") " Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.903872 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80d43f58-fcb7-4227-b9ef-9e302b7ee878-kubelet-dir\") pod \"80d43f58-fcb7-4227-b9ef-9e302b7ee878\" (UID: \"80d43f58-fcb7-4227-b9ef-9e302b7ee878\") " Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.904299 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80d43f58-fcb7-4227-b9ef-9e302b7ee878-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "80d43f58-fcb7-4227-b9ef-9e302b7ee878" (UID: "80d43f58-fcb7-4227-b9ef-9e302b7ee878"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:14:19 crc kubenswrapper[4953]: I1211 10:14:19.917522 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80d43f58-fcb7-4227-b9ef-9e302b7ee878-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "80d43f58-fcb7-4227-b9ef-9e302b7ee878" (UID: "80d43f58-fcb7-4227-b9ef-9e302b7ee878"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.004946 4953 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80d43f58-fcb7-4227-b9ef-9e302b7ee878-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.004978 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80d43f58-fcb7-4227-b9ef-9e302b7ee878-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.256821 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qm4mr" event={"ID":"86f65b63-32e0-49cc-bc96-272ecfb987ed","Type":"ContainerStarted","Data":"8381a17fd6aa4fd123b50646115a776023a68201494d9a11c7da67c3d694c05d"} Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.260069 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.269107 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"80d43f58-fcb7-4227-b9ef-9e302b7ee878","Type":"ContainerDied","Data":"891603bc4c59ef5cd58ff77d0fbf1509692e742e2c324db988fe95f45cd0dcd3"} Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.269132 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="891603bc4c59ef5cd58ff77d0fbf1509692e742e2c324db988fe95f45cd0dcd3" Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.271231 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.271261 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.277468 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.281181 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-j88r5" Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.312854 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-crtp9" Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.327186 4953 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-multus/network-metrics-daemon-qm4mr" podStartSLOduration=156.327163354 podStartE2EDuration="2m36.327163354s" podCreationTimestamp="2025-12-11 10:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:20.32584388 +0000 UTC m=+178.349702923" watchObservedRunningTime="2025-12-11 10:14:20.327163354 +0000 UTC m=+178.351022387" Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.867642 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:20 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:20 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:20 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:20 crc kubenswrapper[4953]: I1211 10:14:20.870106 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:21 crc kubenswrapper[4953]: I1211 10:14:21.354646 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:21 crc kubenswrapper[4953]: I1211 10:14:21.355045 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:21 crc kubenswrapper[4953]: I1211 10:14:21.690526 4953 patch_prober.go:28] interesting pod/router-default-5444994796-v8699 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 10:14:21 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Dec 11 10:14:21 crc kubenswrapper[4953]: [+]process-running ok Dec 11 10:14:21 crc kubenswrapper[4953]: healthz check failed Dec 11 10:14:21 crc kubenswrapper[4953]: I1211 10:14:21.690592 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v8699" podUID="d16293c2-d5aa-41fe-859c-0cc5201b6f0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 10:14:22 crc kubenswrapper[4953]: I1211 10:14:22.443402 4953 generic.go:334] "Generic (PLEG): container finished" podID="498719f2-a1f1-4214-b357-37160f0eabb2" containerID="394744bd0ee95d0cea87d324af61db8e9178192d9947a2d3da0f9747a612f363" exitCode=0 Dec 11 10:14:22 crc kubenswrapper[4953]: I1211 10:14:22.443670 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"498719f2-a1f1-4214-b357-37160f0eabb2","Type":"ContainerDied","Data":"394744bd0ee95d0cea87d324af61db8e9178192d9947a2d3da0f9747a612f363"} Dec 11 10:14:22 crc kubenswrapper[4953]: I1211 10:14:22.709582 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:14:22 crc kubenswrapper[4953]: I1211 10:14:22.758065 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-v8699" Dec 11 10:14:23 crc kubenswrapper[4953]: I1211 10:14:23.993696 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 10:14:24 crc kubenswrapper[4953]: I1211 10:14:24.159938 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/498719f2-a1f1-4214-b357-37160f0eabb2-kube-api-access\") pod \"498719f2-a1f1-4214-b357-37160f0eabb2\" (UID: \"498719f2-a1f1-4214-b357-37160f0eabb2\") " Dec 11 10:14:24 crc kubenswrapper[4953]: I1211 10:14:24.160106 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/498719f2-a1f1-4214-b357-37160f0eabb2-kubelet-dir\") pod \"498719f2-a1f1-4214-b357-37160f0eabb2\" (UID: \"498719f2-a1f1-4214-b357-37160f0eabb2\") " Dec 11 10:14:24 crc kubenswrapper[4953]: I1211 10:14:24.160529 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498719f2-a1f1-4214-b357-37160f0eabb2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "498719f2-a1f1-4214-b357-37160f0eabb2" (UID: "498719f2-a1f1-4214-b357-37160f0eabb2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:14:24 crc kubenswrapper[4953]: I1211 10:14:24.217678 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498719f2-a1f1-4214-b357-37160f0eabb2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "498719f2-a1f1-4214-b357-37160f0eabb2" (UID: "498719f2-a1f1-4214-b357-37160f0eabb2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:14:24 crc kubenswrapper[4953]: I1211 10:14:24.268558 4953 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/498719f2-a1f1-4214-b357-37160f0eabb2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:24 crc kubenswrapper[4953]: I1211 10:14:24.268622 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/498719f2-a1f1-4214-b357-37160f0eabb2-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:24 crc kubenswrapper[4953]: I1211 10:14:24.469503 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"498719f2-a1f1-4214-b357-37160f0eabb2","Type":"ContainerDied","Data":"6780abae2a6dfab12b05bd648eede87260743a1dc69d96eab6b58bf95b813470"} Dec 11 10:14:24 crc kubenswrapper[4953]: I1211 10:14:24.469550 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6780abae2a6dfab12b05bd648eede87260743a1dc69d96eab6b58bf95b813470" Dec 11 10:14:24 crc kubenswrapper[4953]: I1211 10:14:24.469566 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 10:14:25 crc kubenswrapper[4953]: I1211 10:14:25.035776 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:14:25 crc kubenswrapper[4953]: I1211 10:14:25.045845 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:14:25 crc kubenswrapper[4953]: I1211 10:14:25.243724 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:25 crc kubenswrapper[4953]: I1211 10:14:25.243796 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:25 crc kubenswrapper[4953]: I1211 10:14:25.244104 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:25 crc kubenswrapper[4953]: I1211 10:14:25.244123 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:34 crc kubenswrapper[4953]: I1211 10:14:34.465418 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:14:35 crc kubenswrapper[4953]: I1211 10:14:35.245048 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:35 crc kubenswrapper[4953]: I1211 10:14:35.245083 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:35 crc kubenswrapper[4953]: I1211 10:14:35.245127 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:35 crc kubenswrapper[4953]: I1211 10:14:35.245151 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:36 crc kubenswrapper[4953]: I1211 10:14:36.683131 4953 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b82cx" Dec 11 10:14:41 crc kubenswrapper[4953]: I1211 10:14:41.946986 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 10:14:45 crc kubenswrapper[4953]: I1211 10:14:45.242664 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:45 crc kubenswrapper[4953]: I1211 10:14:45.243169 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:45 crc kubenswrapper[4953]: I1211 10:14:45.243270 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:45 crc kubenswrapper[4953]: I1211 10:14:45.243325 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:45 crc kubenswrapper[4953]: I1211 10:14:45.243379 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-9jt44" Dec 11 10:14:45 crc kubenswrapper[4953]: I1211 10:14:45.243866 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:14:45 crc kubenswrapper[4953]: I1211 10:14:45.243897 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:14:45 crc kubenswrapper[4953]: I1211 10:14:45.244176 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"2717f1e9fdf436293416ce41be46b5fe65b4af119b04143f9063146a03ab5772"} pod="openshift-console/downloads-7954f5f757-9jt44" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 11 10:14:45 crc kubenswrapper[4953]: I1211 10:14:45.244234 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" containerID="cri-o://2717f1e9fdf436293416ce41be46b5fe65b4af119b04143f9063146a03ab5772" gracePeriod=2 Dec 11 10:14:46 crc kubenswrapper[4953]: I1211 10:14:46.664865 4953 generic.go:334] "Generic (PLEG): container finished" podID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" 
containerID="2717f1e9fdf436293416ce41be46b5fe65b4af119b04143f9063146a03ab5772" exitCode=0 Dec 11 10:14:46 crc kubenswrapper[4953]: I1211 10:14:46.664917 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9jt44" event={"ID":"63ca4931-8019-4e0d-ab43-ae5bd50b8d91","Type":"ContainerDied","Data":"2717f1e9fdf436293416ce41be46b5fe65b4af119b04143f9063146a03ab5772"} Dec 11 10:14:46 crc kubenswrapper[4953]: I1211 10:14:46.664970 4953 scope.go:117] "RemoveContainer" containerID="2fd19b1d3525293fe8a1689b91e17acf46c7fad4d58d6e03ed0463a14eac4aa9" Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.194333 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.194403 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.194458 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.195172 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.195225 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a" gracePeriod=600 Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.236400 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 10:14:48 crc kubenswrapper[4953]: E1211 10:14:48.236757 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88498e28-0a15-43a5-b157-5a3baccfaaaf" containerName="collect-profiles" Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.236786 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="88498e28-0a15-43a5-b157-5a3baccfaaaf" containerName="collect-profiles" Dec 11 10:14:48 crc kubenswrapper[4953]: E1211 10:14:48.236800 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d43f58-fcb7-4227-b9ef-9e302b7ee878" containerName="pruner" Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.236806 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d43f58-fcb7-4227-b9ef-9e302b7ee878" containerName="pruner" Dec 11 10:14:48 crc kubenswrapper[4953]: E1211 10:14:48.236837 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498719f2-a1f1-4214-b357-37160f0eabb2" containerName="pruner" Dec 11 10:14:48 crc kubenswrapper[4953]: 
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.236844 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="498719f2-a1f1-4214-b357-37160f0eabb2" containerName="pruner"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.236954 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d43f58-fcb7-4227-b9ef-9e302b7ee878" containerName="pruner"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.236966 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="498719f2-a1f1-4214-b357-37160f0eabb2" containerName="pruner"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.236991 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="88498e28-0a15-43a5-b157-5a3baccfaaaf" containerName="collect-profiles"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.237494 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.242485 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.242694 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.252585 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.276747 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8e19eaf-c57f-4e51-9f61-cf1d320bae3c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.276804 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8e19eaf-c57f-4e51-9f61-cf1d320bae3c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.377879 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8e19eaf-c57f-4e51-9f61-cf1d320bae3c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.378289 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8e19eaf-c57f-4e51-9f61-cf1d320bae3c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.377991 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8e19eaf-c57f-4e51-9f61-cf1d320bae3c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.397912 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8e19eaf-c57f-4e51-9f61-cf1d320bae3c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 11 10:14:48 crc kubenswrapper[4953]: I1211 10:14:48.564213 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 11 10:14:52 crc kubenswrapper[4953]: I1211 10:14:52.702216 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a" exitCode=0
Dec 11 10:14:52 crc kubenswrapper[4953]: I1211 10:14:52.702292 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a"}
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.239769 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.241655 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.242318 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-kube-api-access\") pod \"installer-9-crc\" (UID: \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.242398 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.242453 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-var-lock\") pod \"installer-9-crc\" (UID: \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.247459 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.343376 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-kube-api-access\") pod \"installer-9-crc\" (UID: \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.343495 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.343548 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-var-lock\") pod \"installer-9-crc\" (UID: \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.343650 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.343655 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-var-lock\") pod \"installer-9-crc\" (UID: \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.366005 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-kube-api-access\") pod \"installer-9-crc\" (UID: \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 11 10:14:53 crc kubenswrapper[4953]: I1211 10:14:53.559395 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 11 10:14:55 crc kubenswrapper[4953]: I1211 10:14:55.240820 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Dec 11 10:14:55 crc kubenswrapper[4953]: I1211 10:14:55.241210 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.146485 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49"]
Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.148078 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49"
Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.151394 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.151538 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.160945 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49"]
Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.286029 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a45ae5ef-f18c-4275-b9f8-36afd1d25451-config-volume\") pod \"collect-profiles-29424135-wrj49\" (UID: \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49"
Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.286117 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkk2\" (UniqueName: \"kubernetes.io/projected/a45ae5ef-f18c-4275-b9f8-36afd1d25451-kube-api-access-njkk2\") pod \"collect-profiles-29424135-wrj49\" (UID: \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49"
Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.286154 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a45ae5ef-f18c-4275-b9f8-36afd1d25451-secret-volume\") pod \"collect-profiles-29424135-wrj49\" (UID: \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49"
Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.406997 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a45ae5ef-f18c-4275-b9f8-36afd1d25451-config-volume\") pod \"collect-profiles-29424135-wrj49\" (UID: \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49"
Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.407070 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njkk2\" (UniqueName: \"kubernetes.io/projected/a45ae5ef-f18c-4275-b9f8-36afd1d25451-kube-api-access-njkk2\") pod \"collect-profiles-29424135-wrj49\" (UID: \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49"
Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.407109 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a45ae5ef-f18c-4275-b9f8-36afd1d25451-secret-volume\") pod \"collect-profiles-29424135-wrj49\" (UID: \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49"
\"collect-profiles-29424135-wrj49\" (UID: \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49" Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.426286 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a45ae5ef-f18c-4275-b9f8-36afd1d25451-secret-volume\") pod \"collect-profiles-29424135-wrj49\" (UID: \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49" Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.428156 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njkk2\" (UniqueName: \"kubernetes.io/projected/a45ae5ef-f18c-4275-b9f8-36afd1d25451-kube-api-access-njkk2\") pod \"collect-profiles-29424135-wrj49\" (UID: \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49" Dec 11 10:15:00 crc kubenswrapper[4953]: I1211 10:15:00.528741 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49" Dec 11 10:15:05 crc kubenswrapper[4953]: I1211 10:15:05.427246 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:15:05 crc kubenswrapper[4953]: I1211 10:15:05.427723 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:15:15 crc kubenswrapper[4953]: I1211 10:15:15.241249 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:15:15 crc kubenswrapper[4953]: I1211 10:15:15.242177 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:15:21 crc kubenswrapper[4953]: E1211 10:15:21.426127 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 10:15:21 crc kubenswrapper[4953]: E1211 10:15:21.426844 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zvzfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dbdlx_openshift-marketplace(c6173b60-4d44-435b-a606-0b3836f71ad2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 10:15:21 crc kubenswrapper[4953]: E1211 10:15:21.429060 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dbdlx" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" Dec 11 10:15:21 crc kubenswrapper[4953]: E1211 10:15:21.729223 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 10:15:21 crc kubenswrapper[4953]: E1211 10:15:21.729498 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67vlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l5pbm_openshift-marketplace(3b099bc8-faec-451b-88a3-f03e46e3ad94): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 10:15:21 crc kubenswrapper[4953]: E1211 10:15:21.736954 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-l5pbm" podUID="3b099bc8-faec-451b-88a3-f03e46e3ad94" Dec 11 10:15:24 crc kubenswrapper[4953]: E1211 10:15:24.108080 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l5pbm" podUID="3b099bc8-faec-451b-88a3-f03e46e3ad94" Dec 11 10:15:24 crc kubenswrapper[4953]: E1211 10:15:24.108091 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dbdlx" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" Dec 11 10:15:24 crc kubenswrapper[4953]: E1211 10:15:24.138661 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 11 10:15:24 crc kubenswrapper[4953]: E1211 10:15:24.138904 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldkqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cf6dd_openshift-marketplace(cef0b6d3-40d2-4981-894b-962df1304c36): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 10:15:24 crc kubenswrapper[4953]: E1211 10:15:24.140203 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cf6dd" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" Dec 11 10:15:25 crc kubenswrapper[4953]: I1211 10:15:25.241511 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:15:25 crc kubenswrapper[4953]: I1211 10:15:25.242144 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:15:29 crc kubenswrapper[4953]: E1211 10:15:29.721547 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cf6dd" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" Dec 11 10:15:35 crc kubenswrapper[4953]: I1211 10:15:35.243922 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:15:35 crc kubenswrapper[4953]: I1211 10:15:35.244949 4953 prober.go:107] "Probe failed" probeType="Readiness" 
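The ErrImagePull → ImagePullBackOff progression above is kubelet's pull backoff: after a failed pull the next attempt is delayed, with the delay roughly doubling per failure up to a cap. The sketch below models that schedule under assumed defaults (a 10 s base and a 5 m ceiling are the commonly cited values; treat the exact numbers as an assumption, and note this is not kubelet's actual backoff implementation):

package main

import (
	"fmt"
	"time"
)

// backoffDelays models an exponential pull backoff: assumed 10s base,
// doubling per failure, capped at 5m — illustrative of the cadence of the
// "Back-off pulling image" entries above.
func backoffDelays(failures int) []time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delays := make([]time.Duration, 0, failures)
	d := base
	for i := 0; i < failures; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	fmt.Println(backoffDelays(6)) // [10s 20s 40s 1m20s 2m40s 5m0s]
}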
pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:15:36 crc kubenswrapper[4953]: I1211 10:15:36.621256 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8s4mq"] Dec 11 10:15:39 crc kubenswrapper[4953]: E1211 10:15:39.742807 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 10:15:39 crc kubenswrapper[4953]: E1211 10:15:39.743245 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h45m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pxglb_openshift-marketplace(46f197d9-de5c-42c2-9781-47ed42389e11): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 10:15:39 crc kubenswrapper[4953]: E1211 10:15:39.744500 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pxglb" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" Dec 11 10:15:39 crc kubenswrapper[4953]: E1211 10:15:39.763242 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 10:15:39 crc kubenswrapper[4953]: E1211 10:15:39.763409 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6n47f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2f46z_openshift-marketplace(2f406ece-016a-43bc-92c9-473b85ad0ca9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 10:15:39 crc kubenswrapper[4953]: E1211 10:15:39.764900 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2f46z" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" Dec 11 10:15:39 crc kubenswrapper[4953]: E1211 10:15:39.800944 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 11 10:15:39 crc kubenswrapper[4953]: E1211 10:15:39.801146 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6xx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-w2rvh_openshift-marketplace(bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 10:15:39 crc kubenswrapper[4953]: E1211 10:15:39.802373 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-w2rvh" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" Dec 11 10:15:41 crc kubenswrapper[4953]: E1211 10:15:41.707484 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2f46z" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" Dec 11 10:15:41 crc kubenswrapper[4953]: E1211 10:15:41.708170 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-w2rvh" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" Dec 11 10:15:41 crc kubenswrapper[4953]: E1211 10:15:41.708271 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pxglb" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" Dec 11 10:15:42 crc kubenswrapper[4953]: I1211 10:15:42.283316 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 10:15:42 crc kubenswrapper[4953]: W1211 10:15:42.289708 4953 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-podf8e19eaf_c57f_4e51_9f61_cf1d320bae3c.slice/crio-7ea238a9671965cef1a1e057d78c47daa983a8a63cb562f71388efa45ad93a45 WatchSource:0}: Error finding container 7ea238a9671965cef1a1e057d78c47daa983a8a63cb562f71388efa45ad93a45: Status 404 returned error can't find the container with id 7ea238a9671965cef1a1e057d78c47daa983a8a63cb562f71388efa45ad93a45 Dec 11 10:15:42 crc kubenswrapper[4953]: I1211 10:15:42.352872 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 10:15:42 crc kubenswrapper[4953]: I1211 10:15:42.361487 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49"] Dec 11 10:15:42 crc kubenswrapper[4953]: W1211 10:15:42.384942 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda45ae5ef_f18c_4275_b9f8_36afd1d25451.slice/crio-4abc75b010b33f949f867e477a6e468274f5212b8b074e2e188e9124bd02b8c1 WatchSource:0}: Error finding container 4abc75b010b33f949f867e477a6e468274f5212b8b074e2e188e9124bd02b8c1: Status 404 returned error can't find the container with id 4abc75b010b33f949f867e477a6e468274f5212b8b074e2e188e9124bd02b8c1 Dec 11 10:15:42 crc kubenswrapper[4953]: I1211 10:15:42.912875 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c","Type":"ContainerStarted","Data":"7ea238a9671965cef1a1e057d78c47daa983a8a63cb562f71388efa45ad93a45"} Dec 11 10:15:42 crc kubenswrapper[4953]: I1211 10:15:42.914178 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49" event={"ID":"a45ae5ef-f18c-4275-b9f8-36afd1d25451","Type":"ContainerStarted","Data":"4abc75b010b33f949f867e477a6e468274f5212b8b074e2e188e9124bd02b8c1"} Dec 11 10:15:42 crc kubenswrapper[4953]: I1211 10:15:42.915198 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e","Type":"ContainerStarted","Data":"e4743f32241585f0c16adaead8468dc7f6273c8c24e13e91e6ee2bb0ee84c6a4"} Dec 11 10:15:43 crc kubenswrapper[4953]: I1211 10:15:43.923230 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"ebcfb015c8d0726744962a05ad3b02d7514b72b3db32d83919120d58d0255b97"} Dec 11 10:15:44 crc kubenswrapper[4953]: I1211 10:15:44.929289 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49" event={"ID":"a45ae5ef-f18c-4275-b9f8-36afd1d25451","Type":"ContainerStarted","Data":"fc46e6df9641ac46850d4248dc07f77bdb522ee7c24c06377eb246d1986826f7"} Dec 11 10:15:44 crc kubenswrapper[4953]: I1211 10:15:44.931462 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9jt44" event={"ID":"63ca4931-8019-4e0d-ab43-ae5bd50b8d91","Type":"ContainerStarted","Data":"47a35258a3f3a8b3b62aa8c80e6280c5d82e1f0f64479f6a13deebd1be3ba1f1"} Dec 11 10:15:45 crc kubenswrapper[4953]: I1211 10:15:45.243703 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: 
connect: connection refused" start-of-body= Dec 11 10:15:45 crc kubenswrapper[4953]: I1211 10:15:45.243763 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:15:45 crc kubenswrapper[4953]: I1211 10:15:45.939679 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e","Type":"ContainerStarted","Data":"99fa93795555152e47f9bd1702407a33915579712d468f6f75dae69f46079254"} Dec 11 10:15:45 crc kubenswrapper[4953]: I1211 10:15:45.941850 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c","Type":"ContainerStarted","Data":"7ebf090f39a5e7cbec83ffeb0967c185d1b641f5805c11ff81ddd12575bfdd0b"} Dec 11 10:15:45 crc kubenswrapper[4953]: I1211 10:15:45.942647 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9jt44" Dec 11 10:15:45 crc kubenswrapper[4953]: I1211 10:15:45.942680 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:15:45 crc kubenswrapper[4953]: I1211 10:15:45.942716 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:15:45 crc kubenswrapper[4953]: I1211 10:15:45.990542 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49" podStartSLOduration=45.990522396 podStartE2EDuration="45.990522396s" podCreationTimestamp="2025-12-11 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:15:45.988860201 +0000 UTC m=+264.012719244" watchObservedRunningTime="2025-12-11 10:15:45.990522396 +0000 UTC m=+264.014381439" Dec 11 10:15:46 crc kubenswrapper[4953]: E1211 10:15:46.122344 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 11 10:15:46 crc kubenswrapper[4953]: E1211 10:15:46.122956 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmd6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gnxp9_openshift-marketplace(fe9b2116-8ab4-4c4c-8c58-74e62f28893d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 10:15:46 crc kubenswrapper[4953]: E1211 10:15:46.124157 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gnxp9" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" Dec 11 10:15:46 crc kubenswrapper[4953]: E1211 10:15:46.260473 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 11 10:15:46 crc kubenswrapper[4953]: E1211 10:15:46.260778 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl7ms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kkp25_openshift-marketplace(4468c58a-3cfc-4197-bf1b-8afc67dfda5e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 10:15:46 crc kubenswrapper[4953]: E1211 10:15:46.262412 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kkp25" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" Dec 11 10:15:46 crc kubenswrapper[4953]: I1211 10:15:46.952526 4953 generic.go:334] "Generic (PLEG): container finished" podID="a45ae5ef-f18c-4275-b9f8-36afd1d25451" containerID="fc46e6df9641ac46850d4248dc07f77bdb522ee7c24c06377eb246d1986826f7" exitCode=0 Dec 11 10:15:46 crc kubenswrapper[4953]: I1211 10:15:46.952622 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49" event={"ID":"a45ae5ef-f18c-4275-b9f8-36afd1d25451","Type":"ContainerDied","Data":"fc46e6df9641ac46850d4248dc07f77bdb522ee7c24c06377eb246d1986826f7"} Dec 11 10:15:46 crc kubenswrapper[4953]: I1211 10:15:46.955122 4953 generic.go:334] "Generic (PLEG): container finished" podID="f8e19eaf-c57f-4e51-9f61-cf1d320bae3c" containerID="7ebf090f39a5e7cbec83ffeb0967c185d1b641f5805c11ff81ddd12575bfdd0b" exitCode=0 Dec 11 10:15:46 crc kubenswrapper[4953]: I1211 10:15:46.955166 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c","Type":"ContainerDied","Data":"7ebf090f39a5e7cbec83ffeb0967c185d1b641f5805c11ff81ddd12575bfdd0b"} Dec 11 10:15:46 crc kubenswrapper[4953]: I1211 10:15:46.956731 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:15:46 crc kubenswrapper[4953]: I1211 10:15:46.956797 4953 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:15:46 crc kubenswrapper[4953]: E1211 10:15:46.957811 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gnxp9" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" Dec 11 10:15:46 crc kubenswrapper[4953]: E1211 10:15:46.957959 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kkp25" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" Dec 11 10:15:47 crc kubenswrapper[4953]: I1211 10:15:47.057071 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=54.057049813 podStartE2EDuration="54.057049813s" podCreationTimestamp="2025-12-11 10:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:15:47.053465584 +0000 UTC m=+265.077324617" watchObservedRunningTime="2025-12-11 10:15:47.057049813 +0000 UTC m=+265.080908846" Dec 11 10:15:47 crc kubenswrapper[4953]: I1211 10:15:47.967473 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5pbm" event={"ID":"3b099bc8-faec-451b-88a3-f03e46e3ad94","Type":"ContainerStarted","Data":"40cfa2cd3768c6aaca6fb54e82e93fe3ba9a6a359f139c30a4c65bc21eb4799d"} Dec 11 10:15:47 crc kubenswrapper[4953]: I1211 10:15:47.971995 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbdlx" event={"ID":"c6173b60-4d44-435b-a606-0b3836f71ad2","Type":"ContainerStarted","Data":"a272cf3abeed14d12fdd9add3d593b1d3c7ce4b25f42f1bf6bd4c11258c63414"} Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.310694 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.397657 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.509507 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njkk2\" (UniqueName: \"kubernetes.io/projected/a45ae5ef-f18c-4275-b9f8-36afd1d25451-kube-api-access-njkk2\") pod \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\" (UID: \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\") " Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.509625 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a45ae5ef-f18c-4275-b9f8-36afd1d25451-config-volume\") pod \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\" (UID: \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\") " Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.509670 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a45ae5ef-f18c-4275-b9f8-36afd1d25451-secret-volume\") pod \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\" (UID: \"a45ae5ef-f18c-4275-b9f8-36afd1d25451\") " Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.509718 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8e19eaf-c57f-4e51-9f61-cf1d320bae3c-kubelet-dir\") pod \"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c\" (UID: \"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c\") " Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.509747 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8e19eaf-c57f-4e51-9f61-cf1d320bae3c-kube-api-access\") pod \"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c\" (UID: \"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c\") " Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.510318 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a45ae5ef-f18c-4275-b9f8-36afd1d25451-config-volume" (OuterVolumeSpecName: "config-volume") pod "a45ae5ef-f18c-4275-b9f8-36afd1d25451" (UID: "a45ae5ef-f18c-4275-b9f8-36afd1d25451"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.510811 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8e19eaf-c57f-4e51-9f61-cf1d320bae3c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f8e19eaf-c57f-4e51-9f61-cf1d320bae3c" (UID: "f8e19eaf-c57f-4e51-9f61-cf1d320bae3c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.515480 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a45ae5ef-f18c-4275-b9f8-36afd1d25451-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a45ae5ef-f18c-4275-b9f8-36afd1d25451" (UID: "a45ae5ef-f18c-4275-b9f8-36afd1d25451"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.516431 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e19eaf-c57f-4e51-9f61-cf1d320bae3c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f8e19eaf-c57f-4e51-9f61-cf1d320bae3c" (UID: "f8e19eaf-c57f-4e51-9f61-cf1d320bae3c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.517431 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45ae5ef-f18c-4275-b9f8-36afd1d25451-kube-api-access-njkk2" (OuterVolumeSpecName: "kube-api-access-njkk2") pod "a45ae5ef-f18c-4275-b9f8-36afd1d25451" (UID: "a45ae5ef-f18c-4275-b9f8-36afd1d25451"). InnerVolumeSpecName "kube-api-access-njkk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.612198 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njkk2\" (UniqueName: \"kubernetes.io/projected/a45ae5ef-f18c-4275-b9f8-36afd1d25451-kube-api-access-njkk2\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.612242 4953 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a45ae5ef-f18c-4275-b9f8-36afd1d25451-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.612251 4953 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a45ae5ef-f18c-4275-b9f8-36afd1d25451-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.612263 4953 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8e19eaf-c57f-4e51-9f61-cf1d320bae3c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.612272 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8e19eaf-c57f-4e51-9f61-cf1d320bae3c-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.980371 4953 generic.go:334] "Generic (PLEG): container finished" podID="c6173b60-4d44-435b-a606-0b3836f71ad2" containerID="a272cf3abeed14d12fdd9add3d593b1d3c7ce4b25f42f1bf6bd4c11258c63414" exitCode=0 Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.980464 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbdlx" event={"ID":"c6173b60-4d44-435b-a606-0b3836f71ad2","Type":"ContainerDied","Data":"a272cf3abeed14d12fdd9add3d593b1d3c7ce4b25f42f1bf6bd4c11258c63414"} Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.984395 4953 generic.go:334] "Generic (PLEG): container finished" podID="3b099bc8-faec-451b-88a3-f03e46e3ad94" containerID="40cfa2cd3768c6aaca6fb54e82e93fe3ba9a6a359f139c30a4c65bc21eb4799d" exitCode=0 Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.984482 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5pbm" event={"ID":"3b099bc8-faec-451b-88a3-f03e46e3ad94","Type":"ContainerDied","Data":"40cfa2cd3768c6aaca6fb54e82e93fe3ba9a6a359f139c30a4c65bc21eb4799d"} Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.986640 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.986934 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f8e19eaf-c57f-4e51-9f61-cf1d320bae3c","Type":"ContainerDied","Data":"7ea238a9671965cef1a1e057d78c47daa983a8a63cb562f71388efa45ad93a45"} Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.986968 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ea238a9671965cef1a1e057d78c47daa983a8a63cb562f71388efa45ad93a45" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.992943 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49" event={"ID":"a45ae5ef-f18c-4275-b9f8-36afd1d25451","Type":"ContainerDied","Data":"4abc75b010b33f949f867e477a6e468274f5212b8b074e2e188e9124bd02b8c1"} Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.993011 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4abc75b010b33f949f867e477a6e468274f5212b8b074e2e188e9124bd02b8c1" Dec 11 10:15:48 crc kubenswrapper[4953]: I1211 10:15:48.993090 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49" Dec 11 10:15:55 crc kubenswrapper[4953]: I1211 10:15:55.241158 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:15:55 crc kubenswrapper[4953]: I1211 10:15:55.242782 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:15:55 crc kubenswrapper[4953]: I1211 10:15:55.241511 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-9jt44 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 11 10:15:55 crc kubenswrapper[4953]: I1211 10:15:55.243307 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9jt44" podUID="63ca4931-8019-4e0d-ab43-ae5bd50b8d91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 11 10:16:01 crc kubenswrapper[4953]: I1211 10:16:01.866916 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" podUID="f3b9e0de-9d50-4564-b075-9e56de0d6d20" containerName="oauth-openshift" containerID="cri-o://b61d2299ee3d2f27ab6d088e5b26241daa5026da83845ea59aed8f0b7d22afb2" gracePeriod=15 Dec 11 10:16:03 crc kubenswrapper[4953]: I1211 10:16:03.106707 4953 generic.go:334] "Generic (PLEG): container finished" podID="f3b9e0de-9d50-4564-b075-9e56de0d6d20" containerID="b61d2299ee3d2f27ab6d088e5b26241daa5026da83845ea59aed8f0b7d22afb2" exitCode=0 Dec 11 10:16:03 crc kubenswrapper[4953]: I1211 10:16:03.106826 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" event={"ID":"f3b9e0de-9d50-4564-b075-9e56de0d6d20","Type":"ContainerDied","Data":"b61d2299ee3d2f27ab6d088e5b26241daa5026da83845ea59aed8f0b7d22afb2"} Dec 11 10:16:04 crc kubenswrapper[4953]: I1211 10:16:04.985663 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.028707 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf"] Dec 11 10:16:05 crc kubenswrapper[4953]: E1211 10:16:05.029131 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b9e0de-9d50-4564-b075-9e56de0d6d20" containerName="oauth-openshift" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.029163 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b9e0de-9d50-4564-b075-9e56de0d6d20" containerName="oauth-openshift" Dec 11 10:16:05 crc kubenswrapper[4953]: E1211 10:16:05.029182 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45ae5ef-f18c-4275-b9f8-36afd1d25451" containerName="collect-profiles" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.029195 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45ae5ef-f18c-4275-b9f8-36afd1d25451" containerName="collect-profiles" Dec 11 10:16:05 crc kubenswrapper[4953]: E1211 10:16:05.029220 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e19eaf-c57f-4e51-9f61-cf1d320bae3c" containerName="pruner" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.029231 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e19eaf-c57f-4e51-9f61-cf1d320bae3c" containerName="pruner" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.029373 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3b9e0de-9d50-4564-b075-9e56de0d6d20" containerName="oauth-openshift" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.029392 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="a45ae5ef-f18c-4275-b9f8-36afd1d25451" containerName="collect-profiles" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.029404 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e19eaf-c57f-4e51-9f61-cf1d320bae3c" containerName="pruner" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.030024 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.052714 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf"] Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.117663 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-serving-cert\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.117744 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-provider-selection\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.117786 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3b9e0de-9d50-4564-b075-9e56de0d6d20-audit-dir\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.117857 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-error\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.117907 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-trusted-ca-bundle\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.117953 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-session\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.117943 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3b9e0de-9d50-4564-b075-9e56de0d6d20-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.118006 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-service-ca\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.118034 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-login\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.118073 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-idp-0-file-data\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.118113 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-audit-policies\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.118140 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-ocp-branding-template\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.118173 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-router-certs\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.118199 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp6wb\" (UniqueName: \"kubernetes.io/projected/f3b9e0de-9d50-4564-b075-9e56de0d6d20-kube-api-access-qp6wb\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.118251 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-cliconfig\") pod \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\" (UID: \"f3b9e0de-9d50-4564-b075-9e56de0d6d20\") " Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.128809 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.130926 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.154263 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.155301 4953 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3b9e0de-9d50-4564-b075-9e56de0d6d20-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.155334 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.155349 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.155372 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.156690 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.168466 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf6dd" event={"ID":"cef0b6d3-40d2-4981-894b-962df1304c36","Type":"ContainerStarted","Data":"9658dbfb5befa8e7bc2c911bd90c45f78fa16822dc206266d4071f5e6712292b"} Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.170965 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5pbm" event={"ID":"3b099bc8-faec-451b-88a3-f03e46e3ad94","Type":"ContainerStarted","Data":"712dc190de17abed413e4e7eadcec31160c952c72a60dc5438de29e84c8d93ed"} Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.172745 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" event={"ID":"f3b9e0de-9d50-4564-b075-9e56de0d6d20","Type":"ContainerDied","Data":"fb0238f0017e9236b1a4c2b5762dacff0701152f9cfbaa2a148e8686e2f14ecd"} Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.172820 4953 scope.go:117] "RemoveContainer" containerID="b61d2299ee3d2f27ab6d088e5b26241daa5026da83845ea59aed8f0b7d22afb2" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.173218 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8s4mq" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.175564 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbdlx" event={"ID":"c6173b60-4d44-435b-a606-0b3836f71ad2","Type":"ContainerStarted","Data":"f103ed42f555aae32606085b73eb4a626cae9f1cf8a84f3dfa1837a4c4519ddc"} Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.182654 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.184053 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3b9e0de-9d50-4564-b075-9e56de0d6d20-kube-api-access-qp6wb" (OuterVolumeSpecName: "kube-api-access-qp6wb") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "kube-api-access-qp6wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.185327 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.185360 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.185765 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.256775 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.256896 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-user-template-login\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.256946 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.256988 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257040 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257060 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/569c3a66-eccc-4cda-953c-e4119c1e4cee-audit-policies\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257081 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257100 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/569c3a66-eccc-4cda-953c-e4119c1e4cee-audit-dir\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257128 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257162 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5thbx\" (UniqueName: \"kubernetes.io/projected/569c3a66-eccc-4cda-953c-e4119c1e4cee-kube-api-access-5thbx\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257195 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-session\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257216 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257258 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257282 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-user-template-error\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257328 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257340 4953 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3b9e0de-9d50-4564-b075-9e56de0d6d20-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257357 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257371 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp6wb\" (UniqueName: \"kubernetes.io/projected/f3b9e0de-9d50-4564-b075-9e56de0d6d20-kube-api-access-qp6wb\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257390 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.257399 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.338904 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358111 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358175 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-user-template-error\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358214 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358250 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-user-template-login\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358285 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358320 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358354 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358387 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/569c3a66-eccc-4cda-953c-e4119c1e4cee-audit-policies\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " 
pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358422 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358465 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/569c3a66-eccc-4cda-953c-e4119c1e4cee-audit-dir\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358506 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358537 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5thbx\" (UniqueName: \"kubernetes.io/projected/569c3a66-eccc-4cda-953c-e4119c1e4cee-kube-api-access-5thbx\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358593 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-session\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358618 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.358682 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.359426 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/569c3a66-eccc-4cda-953c-e4119c1e4cee-audit-policies\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.359464 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.359421 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.359975 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.360018 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.360409 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.360463 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/569c3a66-eccc-4cda-953c-e4119c1e4cee-audit-dir\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.361397 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f3b9e0de-9d50-4564-b075-9e56de0d6d20" (UID: "f3b9e0de-9d50-4564-b075-9e56de0d6d20"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.362440 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.362996 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.363429 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.364862 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-user-template-error\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.364952 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.367862 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.368343 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-user-template-login\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.372810 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9jt44" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.491859 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.491903 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.491917 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3b9e0de-9d50-4564-b075-9e56de0d6d20-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.504379 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/569c3a66-eccc-4cda-953c-e4119c1e4cee-v4-0-config-system-session\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.508062 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5thbx\" (UniqueName: \"kubernetes.io/projected/569c3a66-eccc-4cda-953c-e4119c1e4cee-kube-api-access-5thbx\") pod \"oauth-openshift-5dcd86cbbd-78cmf\" (UID: \"569c3a66-eccc-4cda-953c-e4119c1e4cee\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.541207 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8s4mq"] Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.549918 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8s4mq"] Dec 11 10:16:05 crc kubenswrapper[4953]: I1211 10:16:05.681281 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:06 crc kubenswrapper[4953]: I1211 10:16:06.193088 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2rvh" event={"ID":"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6","Type":"ContainerStarted","Data":"716c7bc31ebb28ad0a2637286a01f7c8cc6b1fb54a5615922d5a2021ae0d4ef5"} Dec 11 10:16:06 crc kubenswrapper[4953]: I1211 10:16:06.300663 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l5pbm" podStartSLOduration=12.820292676 podStartE2EDuration="1m59.300633574s" podCreationTimestamp="2025-12-11 10:14:07 +0000 UTC" firstStartedPulling="2025-12-11 10:14:16.607741467 +0000 UTC m=+174.631600500" lastFinishedPulling="2025-12-11 10:16:03.088082365 +0000 UTC m=+281.111941398" observedRunningTime="2025-12-11 10:16:06.286021739 +0000 UTC m=+284.309880772" watchObservedRunningTime="2025-12-11 10:16:06.300633574 +0000 UTC m=+284.324492607" Dec 11 10:16:06 crc kubenswrapper[4953]: I1211 10:16:06.347907 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dbdlx" podStartSLOduration=16.366599451 podStartE2EDuration="1m59.347891432s" podCreationTimestamp="2025-12-11 10:14:07 +0000 UTC" firstStartedPulling="2025-12-11 10:14:19.164142879 +0000 UTC m=+177.188001912" lastFinishedPulling="2025-12-11 10:16:02.14543486 +0000 UTC m=+280.169293893" observedRunningTime="2025-12-11 10:16:06.34749665 +0000 UTC m=+284.371355683" watchObservedRunningTime="2025-12-11 10:16:06.347891432 +0000 UTC m=+284.371750465" Dec 11 10:16:06 crc kubenswrapper[4953]: I1211 10:16:06.462398 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf"] Dec 11 10:16:06 crc kubenswrapper[4953]: W1211 10:16:06.471726 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod569c3a66_eccc_4cda_953c_e4119c1e4cee.slice/crio-3e1798acc9b07f86d643e7c1a34c07ca98a741589b155a3e01e82e9f6c470671 WatchSource:0}: Error finding container 3e1798acc9b07f86d643e7c1a34c07ca98a741589b155a3e01e82e9f6c470671: Status 404 returned error can't find the container with id 3e1798acc9b07f86d643e7c1a34c07ca98a741589b155a3e01e82e9f6c470671 Dec 11 10:16:06 crc kubenswrapper[4953]: I1211 10:16:06.502756 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3b9e0de-9d50-4564-b075-9e56de0d6d20" path="/var/lib/kubelet/pods/f3b9e0de-9d50-4564-b075-9e56de0d6d20/volumes" Dec 11 10:16:07 crc kubenswrapper[4953]: I1211 10:16:07.210199 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" event={"ID":"569c3a66-eccc-4cda-953c-e4119c1e4cee","Type":"ContainerStarted","Data":"3e1798acc9b07f86d643e7c1a34c07ca98a741589b155a3e01e82e9f6c470671"} Dec 11 10:16:07 crc kubenswrapper[4953]: I1211 10:16:07.212292 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxglb" event={"ID":"46f197d9-de5c-42c2-9781-47ed42389e11","Type":"ContainerStarted","Data":"8e496bb06849616e21236ee40f86b43a8f15a8473596a3beef266909bfda57b5"} Dec 11 10:16:07 crc kubenswrapper[4953]: I1211 10:16:07.214425 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f46z" 
event={"ID":"2f406ece-016a-43bc-92c9-473b85ad0ca9","Type":"ContainerStarted","Data":"4972be10e78ae7134e8bf3613f5b433589964dcdeebc193ba0d6e7ff302ed133"} Dec 11 10:16:08 crc kubenswrapper[4953]: I1211 10:16:08.249163 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkp25" event={"ID":"4468c58a-3cfc-4197-bf1b-8afc67dfda5e","Type":"ContainerStarted","Data":"e18411fd4726cf01f45d745aeb9e324e72883ea1dd3a39e4beac0742646f2dae"} Dec 11 10:16:08 crc kubenswrapper[4953]: I1211 10:16:08.252214 4953 generic.go:334] "Generic (PLEG): container finished" podID="cef0b6d3-40d2-4981-894b-962df1304c36" containerID="9658dbfb5befa8e7bc2c911bd90c45f78fa16822dc206266d4071f5e6712292b" exitCode=0 Dec 11 10:16:08 crc kubenswrapper[4953]: I1211 10:16:08.252273 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf6dd" event={"ID":"cef0b6d3-40d2-4981-894b-962df1304c36","Type":"ContainerDied","Data":"9658dbfb5befa8e7bc2c911bd90c45f78fa16822dc206266d4071f5e6712292b"} Dec 11 10:16:08 crc kubenswrapper[4953]: I1211 10:16:08.255668 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" event={"ID":"569c3a66-eccc-4cda-953c-e4119c1e4cee","Type":"ContainerStarted","Data":"5a5d3ff48fd5a74eee2d1e58b3304686e6b6036b793a2fa59cbe9843d098320d"} Dec 11 10:16:08 crc kubenswrapper[4953]: I1211 10:16:08.256505 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:08 crc kubenswrapper[4953]: I1211 10:16:08.350937 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" podStartSLOduration=32.350918459 podStartE2EDuration="32.350918459s" podCreationTimestamp="2025-12-11 10:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:16:08.348941064 +0000 UTC m=+286.372800097" watchObservedRunningTime="2025-12-11 10:16:08.350918459 +0000 UTC m=+286.374777512" Dec 11 10:16:09 crc kubenswrapper[4953]: I1211 10:16:08.673884 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:16:09 crc kubenswrapper[4953]: I1211 10:16:08.675697 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:16:09 crc kubenswrapper[4953]: I1211 10:16:08.868631 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-78cmf" Dec 11 10:16:10 crc kubenswrapper[4953]: I1211 10:16:10.272543 4953 generic.go:334] "Generic (PLEG): container finished" podID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" containerID="e18411fd4726cf01f45d745aeb9e324e72883ea1dd3a39e4beac0742646f2dae" exitCode=0 Dec 11 10:16:10 crc kubenswrapper[4953]: I1211 10:16:10.272951 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkp25" event={"ID":"4468c58a-3cfc-4197-bf1b-8afc67dfda5e","Type":"ContainerDied","Data":"e18411fd4726cf01f45d745aeb9e324e72883ea1dd3a39e4beac0742646f2dae"} Dec 11 10:16:10 crc kubenswrapper[4953]: I1211 10:16:10.279147 4953 generic.go:334] "Generic (PLEG): container finished" podID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" 
containerID="716c7bc31ebb28ad0a2637286a01f7c8cc6b1fb54a5615922d5a2021ae0d4ef5" exitCode=0 Dec 11 10:16:10 crc kubenswrapper[4953]: I1211 10:16:10.279893 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2rvh" event={"ID":"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6","Type":"ContainerDied","Data":"716c7bc31ebb28ad0a2637286a01f7c8cc6b1fb54a5615922d5a2021ae0d4ef5"} Dec 11 10:16:10 crc kubenswrapper[4953]: I1211 10:16:10.482421 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-l5pbm" podUID="3b099bc8-faec-451b-88a3-f03e46e3ad94" containerName="registry-server" probeResult="failure" output=< Dec 11 10:16:10 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s Dec 11 10:16:10 crc kubenswrapper[4953]: > Dec 11 10:16:12 crc kubenswrapper[4953]: I1211 10:16:12.296123 4953 generic.go:334] "Generic (PLEG): container finished" podID="46f197d9-de5c-42c2-9781-47ed42389e11" containerID="8e496bb06849616e21236ee40f86b43a8f15a8473596a3beef266909bfda57b5" exitCode=0 Dec 11 10:16:12 crc kubenswrapper[4953]: I1211 10:16:12.296208 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxglb" event={"ID":"46f197d9-de5c-42c2-9781-47ed42389e11","Type":"ContainerDied","Data":"8e496bb06849616e21236ee40f86b43a8f15a8473596a3beef266909bfda57b5"} Dec 11 10:16:12 crc kubenswrapper[4953]: I1211 10:16:12.299655 4953 generic.go:334] "Generic (PLEG): container finished" podID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerID="4972be10e78ae7134e8bf3613f5b433589964dcdeebc193ba0d6e7ff302ed133" exitCode=0 Dec 11 10:16:12 crc kubenswrapper[4953]: I1211 10:16:12.299711 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f46z" event={"ID":"2f406ece-016a-43bc-92c9-473b85ad0ca9","Type":"ContainerDied","Data":"4972be10e78ae7134e8bf3613f5b433589964dcdeebc193ba0d6e7ff302ed133"} Dec 11 10:16:14 crc kubenswrapper[4953]: I1211 10:16:14.404427 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:16:14 crc kubenswrapper[4953]: I1211 10:16:14.404891 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:16:14 crc kubenswrapper[4953]: I1211 10:16:14.575384 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:16:15 crc kubenswrapper[4953]: I1211 10:16:15.319113 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf6dd" event={"ID":"cef0b6d3-40d2-4981-894b-962df1304c36","Type":"ContainerStarted","Data":"108fd85f0f1ab0a5ec256d6544f68f5afc24a7358df6435f00e3e39642eff318"} Dec 11 10:16:15 crc kubenswrapper[4953]: I1211 10:16:15.341950 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cf6dd" podStartSLOduration=13.147735758 podStartE2EDuration="2m8.341931638s" podCreationTimestamp="2025-12-11 10:14:07 +0000 UTC" firstStartedPulling="2025-12-11 10:14:19.145460554 +0000 UTC m=+177.169319587" lastFinishedPulling="2025-12-11 10:16:14.339656414 +0000 UTC m=+292.363515467" observedRunningTime="2025-12-11 10:16:15.339538929 +0000 UTC m=+293.363397962" watchObservedRunningTime="2025-12-11 10:16:15.341931638 +0000 UTC m=+293.365790661" Dec 11 10:16:15 crc kubenswrapper[4953]: I1211 
Dec 11 10:16:15 crc kubenswrapper[4953]: I1211 10:16:15.379421 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dbdlx"
Dec 11 10:16:18 crc kubenswrapper[4953]: I1211 10:16:18.483096 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dbdlx"]
Dec 11 10:16:18 crc kubenswrapper[4953]: I1211 10:16:18.483746 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dbdlx" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" containerName="registry-server" containerID="cri-o://f103ed42f555aae32606085b73eb4a626cae9f1cf8a84f3dfa1837a4c4519ddc" gracePeriod=2
Dec 11 10:16:18 crc kubenswrapper[4953]: I1211 10:16:18.727499 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l5pbm"
Dec 11 10:16:19 crc kubenswrapper[4953]: I1211 10:16:19.034130 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l5pbm"
Dec 11 10:16:19 crc kubenswrapper[4953]: I1211 10:16:19.860111 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cf6dd"
Dec 11 10:16:19 crc kubenswrapper[4953]: I1211 10:16:19.860496 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cf6dd"
Dec 11 10:16:19 crc kubenswrapper[4953]: I1211 10:16:19.965354 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cf6dd"
Dec 11 10:16:20 crc kubenswrapper[4953]: I1211 10:16:20.400219 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cf6dd"
Dec 11 10:16:21 crc kubenswrapper[4953]: I1211 10:16:21.564256 4953 generic.go:334] "Generic (PLEG): container finished" podID="c6173b60-4d44-435b-a606-0b3836f71ad2" containerID="f103ed42f555aae32606085b73eb4a626cae9f1cf8a84f3dfa1837a4c4519ddc" exitCode=0
Dec 11 10:16:21 crc kubenswrapper[4953]: I1211 10:16:21.564332 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbdlx" event={"ID":"c6173b60-4d44-435b-a606-0b3836f71ad2","Type":"ContainerDied","Data":"f103ed42f555aae32606085b73eb4a626cae9f1cf8a84f3dfa1837a4c4519ddc"}
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.075365 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cf6dd"]
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.579685 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cf6dd" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" containerName="registry-server" containerID="cri-o://108fd85f0f1ab0a5ec256d6544f68f5afc24a7358df6435f00e3e39642eff318" gracePeriod=2
Dec 11 10:16:22 crc kubenswrapper[4953]: E1211 10:16:22.951041 4953 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.952980 4953 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
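
The registry-server startup and readiness probes above are gRPC health checks: the earlier failure output shows the probe command, grpc_health_probe -addr=:50051. A minimal sketch of the same check against the standard grpc.health.v1 Check RPC; the address, the one-second timeout, and the plaintext transport are assumptions for illustration, not taken from the log:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // Mirror the probe's 1s budget; the log's failure is exactly this timeout firing.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        conn, err := grpc.Dial("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            fmt.Fprintln(os.Stderr, "dial:", err)
            os.Exit(1)
        }
        defer conn.Close()

        // An empty Service name asks about overall server health.
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
            fmt.Fprintln(os.Stderr, "probe failed:", err)
            os.Exit(1) // non-zero exit marks the probe unhealthy
        }
        fmt.Println("SERVING")
    }
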
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.953986 4953 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.954122 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.954335 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f" gracePeriod=15
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.954483 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109" gracePeriod=15
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.954506 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950" gracePeriod=15
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.954550 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5" gracePeriod=15
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.954613 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39" gracePeriod=15
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.959383 4953 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 11 10:16:22 crc kubenswrapper[4953]: E1211 10:16:22.960153 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960179 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 11 10:16:22 crc kubenswrapper[4953]: E1211 10:16:22.960202 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960209 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 11 10:16:22 crc kubenswrapper[4953]: E1211 10:16:22.960220 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960229 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 11 10:16:22 crc kubenswrapper[4953]: E1211 10:16:22.960240 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960248 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 11 10:16:22 crc kubenswrapper[4953]: E1211 10:16:22.960260 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960267 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 11 10:16:22 crc kubenswrapper[4953]: E1211 10:16:22.960275 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960282 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 11 10:16:22 crc kubenswrapper[4953]: E1211 10:16:22.960293 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960300 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960456 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960471 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960484 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960494 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960506 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.960515 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.988469 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.988668 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.988702 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.988872 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.989022 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.989091 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:22 crc kubenswrapper[4953]: I1211 10:16:22.989147 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.011350 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.277546 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.277670 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.277726 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.277795 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.277838 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.277906 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.277967 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.278037 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.278097 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.278190 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.278243 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.278302 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.278329 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.278390 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.278416 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.278247 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.313362 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:16:23 crc kubenswrapper[4953]: E1211 10:16:23.385847 4953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.134:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-gnxp9.188021c4df0f9304 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-gnxp9,UID:fe9b2116-8ab4-4c4c-8c58-74e62f28893d,APIVersion:v1,ResourceVersion:28292,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 21.223s (21.223s including waiting). Image size: 1154573130 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 10:16:23.3688809 +0000 UTC m=+301.392739933,LastTimestamp:2025-12-11 10:16:23.3688809 +0000 UTC m=+301.392739933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.434697 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.435507 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.437631 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.438015 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.534090 4953 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.534168 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.583096 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvzfq\" (UniqueName: \"kubernetes.io/projected/c6173b60-4d44-435b-a606-0b3836f71ad2-kube-api-access-zvzfq\") pod \"c6173b60-4d44-435b-a606-0b3836f71ad2\" (UID: \"c6173b60-4d44-435b-a606-0b3836f71ad2\") " Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.583234 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6173b60-4d44-435b-a606-0b3836f71ad2-utilities\") pod \"c6173b60-4d44-435b-a606-0b3836f71ad2\" (UID: \"c6173b60-4d44-435b-a606-0b3836f71ad2\") " Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.583273 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6173b60-4d44-435b-a606-0b3836f71ad2-catalog-content\") pod \"c6173b60-4d44-435b-a606-0b3836f71ad2\" (UID: \"c6173b60-4d44-435b-a606-0b3836f71ad2\") " Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.585361 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6173b60-4d44-435b-a606-0b3836f71ad2-utilities" (OuterVolumeSpecName: "utilities") pod "c6173b60-4d44-435b-a606-0b3836f71ad2" (UID: "c6173b60-4d44-435b-a606-0b3836f71ad2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.588072 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.589546 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.590252 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39" exitCode=2 Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.592393 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbdlx" event={"ID":"c6173b60-4d44-435b-a606-0b3836f71ad2","Type":"ContainerDied","Data":"e68ea1847bc2e2a31cc0f4c60fdb0bc1f2cbef1d73d219d60610728fa7b5c25a"} Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.592513 4953 scope.go:117] "RemoveContainer" containerID="f103ed42f555aae32606085b73eb4a626cae9f1cf8a84f3dfa1837a4c4519ddc" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.592757 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dbdlx" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.594096 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.594378 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.594635 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.601088 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6173b60-4d44-435b-a606-0b3836f71ad2-kube-api-access-zvzfq" (OuterVolumeSpecName: "kube-api-access-zvzfq") pod "c6173b60-4d44-435b-a606-0b3836f71ad2" (UID: "c6173b60-4d44-435b-a606-0b3836f71ad2"). InnerVolumeSpecName "kube-api-access-zvzfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.685185 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6173b60-4d44-435b-a606-0b3836f71ad2-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.685223 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvzfq\" (UniqueName: \"kubernetes.io/projected/c6173b60-4d44-435b-a606-0b3836f71ad2-kube-api-access-zvzfq\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.763204 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6173b60-4d44-435b-a606-0b3836f71ad2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6173b60-4d44-435b-a606-0b3836f71ad2" (UID: "c6173b60-4d44-435b-a606-0b3836f71ad2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.785961 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6173b60-4d44-435b-a606-0b3836f71ad2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.912422 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.912876 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:23 crc kubenswrapper[4953]: I1211 10:16:23.913261 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:24 crc kubenswrapper[4953]: I1211 10:16:24.613203 4953 generic.go:334] "Generic (PLEG): container finished" podID="cef0b6d3-40d2-4981-894b-962df1304c36" containerID="108fd85f0f1ab0a5ec256d6544f68f5afc24a7358df6435f00e3e39642eff318" exitCode=0 Dec 11 10:16:24 crc kubenswrapper[4953]: I1211 10:16:24.613319 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf6dd" event={"ID":"cef0b6d3-40d2-4981-894b-962df1304c36","Type":"ContainerDied","Data":"108fd85f0f1ab0a5ec256d6544f68f5afc24a7358df6435f00e3e39642eff318"} Dec 11 10:16:26 crc kubenswrapper[4953]: E1211 10:16:26.033807 4953 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:26 crc kubenswrapper[4953]: E1211 10:16:26.034522 4953 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:26 crc kubenswrapper[4953]: E1211 10:16:26.034897 4953 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:26 crc kubenswrapper[4953]: E1211 10:16:26.035140 4953 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:26 crc kubenswrapper[4953]: E1211 10:16:26.035552 4953 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:26 crc kubenswrapper[4953]: I1211 10:16:26.035622 4953 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 11 10:16:26 crc kubenswrapper[4953]: E1211 10:16:26.036055 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" interval="200ms" Dec 11 10:16:26 crc kubenswrapper[4953]: E1211 10:16:26.237210 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" interval="400ms" Dec 11 10:16:26 crc kubenswrapper[4953]: E1211 10:16:26.638001 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" interval="800ms" Dec 11 10:16:27 crc kubenswrapper[4953]: E1211 10:16:27.439140 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" interval="1.6s" Dec 11 10:16:28 crc kubenswrapper[4953]: I1211 10:16:28.705923 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 10:16:28 crc kubenswrapper[4953]: I1211 10:16:28.708793 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 10:16:28 crc kubenswrapper[4953]: I1211 10:16:28.710220 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950" exitCode=0 Dec 11 10:16:28 crc kubenswrapper[4953]: I1211 10:16:28.710247 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109" 
exitCode=0 Dec 11 10:16:28 crc kubenswrapper[4953]: I1211 10:16:28.710255 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5" exitCode=0 Dec 11 10:16:28 crc kubenswrapper[4953]: I1211 10:16:28.710262 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f" exitCode=0 Dec 11 10:16:29 crc kubenswrapper[4953]: E1211 10:16:29.040254 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" interval="3.2s" Dec 11 10:16:29 crc kubenswrapper[4953]: I1211 10:16:29.719005 4953 generic.go:334] "Generic (PLEG): container finished" podID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" containerID="99fa93795555152e47f9bd1702407a33915579712d468f6f75dae69f46079254" exitCode=0 Dec 11 10:16:29 crc kubenswrapper[4953]: I1211 10:16:29.719074 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e","Type":"ContainerDied","Data":"99fa93795555152e47f9bd1702407a33915579712d468f6f75dae69f46079254"} Dec 11 10:16:29 crc kubenswrapper[4953]: I1211 10:16:29.719988 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:29 crc kubenswrapper[4953]: I1211 10:16:29.720353 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:29 crc kubenswrapper[4953]: I1211 10:16:29.721229 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:29 crc kubenswrapper[4953]: E1211 10:16:29.860559 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 108fd85f0f1ab0a5ec256d6544f68f5afc24a7358df6435f00e3e39642eff318 is running failed: container process not found" containerID="108fd85f0f1ab0a5ec256d6544f68f5afc24a7358df6435f00e3e39642eff318" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:16:29 crc kubenswrapper[4953]: E1211 10:16:29.860938 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 108fd85f0f1ab0a5ec256d6544f68f5afc24a7358df6435f00e3e39642eff318 is running failed: container process not found" containerID="108fd85f0f1ab0a5ec256d6544f68f5afc24a7358df6435f00e3e39642eff318" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:16:29 crc 
kubenswrapper[4953]: E1211 10:16:29.861205 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 108fd85f0f1ab0a5ec256d6544f68f5afc24a7358df6435f00e3e39642eff318 is running failed: container process not found" containerID="108fd85f0f1ab0a5ec256d6544f68f5afc24a7358df6435f00e3e39642eff318" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:16:29 crc kubenswrapper[4953]: E1211 10:16:29.861245 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 108fd85f0f1ab0a5ec256d6544f68f5afc24a7358df6435f00e3e39642eff318 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-cf6dd" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" containerName="registry-server" Dec 11 10:16:30 crc kubenswrapper[4953]: E1211 10:16:30.802761 4953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.134:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-gnxp9.188021c4df0f9304 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-gnxp9,UID:fe9b2116-8ab4-4c4c-8c58-74e62f28893d,APIVersion:v1,ResourceVersion:28292,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 21.223s (21.223s including waiting). Image size: 1154573130 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 10:16:23.3688809 +0000 UTC m=+301.392739933,LastTimestamp:2025-12-11 10:16:23.3688809 +0000 UTC m=+301.392739933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 10:16:32 crc kubenswrapper[4953]: E1211 10:16:32.242140 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" interval="6.4s" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.476484 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.477114 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.477518 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 
38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.693920 4953 scope.go:117] "RemoveContainer" containerID="a272cf3abeed14d12fdd9add3d593b1d3c7ce4b25f42f1bf6bd4c11258c63414" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.753497 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e","Type":"ContainerDied","Data":"e4743f32241585f0c16adaead8468dc7f6273c8c24e13e91e6ee2bb0ee84c6a4"} Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.753545 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4743f32241585f0c16adaead8468dc7f6273c8c24e13e91e6ee2bb0ee84c6a4" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.758289 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.760551 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.761253 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3ec2334e02d67d8a914755f1fadf87b4b4858f1f5400fe2249a290a0ee22d70" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.770701 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf6dd" event={"ID":"cef0b6d3-40d2-4981-894b-962df1304c36","Type":"ContainerDied","Data":"e2c8787925d5def8e22f73d88f92ec15384e786a2aa4fff8f468d97d79d51ac1"} Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.770753 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2c8787925d5def8e22f73d88f92ec15384e786a2aa4fff8f468d97d79d51ac1" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.798115 4953 scope.go:117] "RemoveContainer" containerID="d1ac1fd5867034994d70b5ae73677052750ff6f8984aedc59ed92adf8343ae99" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.803734 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.804160 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.804432 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.804800 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.805238 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.830445 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.831969 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.833422 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.834166 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.834728 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.835022 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.835283 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.835541 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.854768 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.855407 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.855682 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.856202 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.856714 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.856972 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:32 crc kubenswrapper[4953]: I1211 10:16:32.862162 4953 scope.go:117] "RemoveContainer" containerID="a91255550d88dd1963fef1112d90d2c1e779fc3e2dd1e7c824640879b8c6a58e" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.007927 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-kubelet-dir\") pod \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\" (UID: \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\") " Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.007988 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-var-lock\") pod \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\" (UID: \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\") " Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.008012 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.008028 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" 
(UID: "277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.008050 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef0b6d3-40d2-4981-894b-962df1304c36-catalog-content\") pod \"cef0b6d3-40d2-4981-894b-962df1304c36\" (UID: \"cef0b6d3-40d2-4981-894b-962df1304c36\") " Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.008167 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef0b6d3-40d2-4981-894b-962df1304c36-utilities\") pod \"cef0b6d3-40d2-4981-894b-962df1304c36\" (UID: \"cef0b6d3-40d2-4981-894b-962df1304c36\") " Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.008243 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldkqc\" (UniqueName: \"kubernetes.io/projected/cef0b6d3-40d2-4981-894b-962df1304c36-kube-api-access-ldkqc\") pod \"cef0b6d3-40d2-4981-894b-962df1304c36\" (UID: \"cef0b6d3-40d2-4981-894b-962df1304c36\") " Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.008273 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.008323 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-kube-api-access\") pod \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\" (UID: \"277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e\") " Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.008370 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.008485 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.008584 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.008739 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.009513 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef0b6d3-40d2-4981-894b-962df1304c36-utilities" (OuterVolumeSpecName: "utilities") pod "cef0b6d3-40d2-4981-894b-962df1304c36" (UID: "cef0b6d3-40d2-4981-894b-962df1304c36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.010520 4953 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.010563 4953 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.010694 4953 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.010726 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef0b6d3-40d2-4981-894b-962df1304c36-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.010757 4953 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.010438 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-var-lock" (OuterVolumeSpecName: "var-lock") pod "277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" (UID: "277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.013518 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" (UID: "277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.013662 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef0b6d3-40d2-4981-894b-962df1304c36-kube-api-access-ldkqc" (OuterVolumeSpecName: "kube-api-access-ldkqc") pod "cef0b6d3-40d2-4981-894b-962df1304c36" (UID: "cef0b6d3-40d2-4981-894b-962df1304c36"). InnerVolumeSpecName "kube-api-access-ldkqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.086936 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef0b6d3-40d2-4981-894b-962df1304c36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cef0b6d3-40d2-4981-894b-962df1304c36" (UID: "cef0b6d3-40d2-4981-894b-962df1304c36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.112019 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldkqc\" (UniqueName: \"kubernetes.io/projected/cef0b6d3-40d2-4981-894b-962df1304c36-kube-api-access-ldkqc\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.112051 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.112061 4953 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.112070 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef0b6d3-40d2-4981-894b-962df1304c36-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.779049 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkp25" event={"ID":"4468c58a-3cfc-4197-bf1b-8afc67dfda5e","Type":"ContainerStarted","Data":"e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9"} Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.779837 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.780373 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.780613 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.780859 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.781094 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.781401 4953 status_manager.go:851] "Failed to get status 
for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.782109 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxglb" event={"ID":"46f197d9-de5c-42c2-9781-47ed42389e11","Type":"ContainerStarted","Data":"a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8"} Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.782917 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.783341 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.783546 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.783744 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.784023 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.784405 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.784763 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.784891 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2rvh" 
event={"ID":"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6","Type":"ContainerStarted","Data":"21d5ba454bbbb7dd8c66a2b86d0764c525225c76a2b143c1aa1102d65d0d8bb3"} Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.785811 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.786318 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.786987 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.787664 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f46z" event={"ID":"2f406ece-016a-43bc-92c9-473b85ad0ca9","Type":"ContainerStarted","Data":"9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a"} Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.787778 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.789091 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.790103 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.790207 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8342838c53b33171c8aa25456ff49154589485951fdf68f8d8b0f18798ef9384"} Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.790241 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6aeb841a337178ff2e3f327f644c99412d7fe8a543f6bfdbebb2961677d26c73"} Dec 11 10:16:33 crc kubenswrapper[4953]: 
I1211 10:16:33.792158 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.793514 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.794009 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.794383 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.794620 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.794829 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.795054 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.795262 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.795433 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 
10:16:33.795799 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.796023 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.797689 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.798490 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.799187 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.799524 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.799840 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.800217 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.800788 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.801037 4953 generic.go:334] "Generic (PLEG): container finished" podID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" containerID="c858e9cf57114f1d8ef9dce55a3321c45e3383a8bec4f4abb3d69bbf946e7ca0" exitCode=0 Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.801083 4953 status_manager.go:851] "Failed to get status for pod" 
podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.801107 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxp9" event={"ID":"fe9b2116-8ab4-4c4c-8c58-74e62f28893d","Type":"ContainerDied","Data":"c858e9cf57114f1d8ef9dce55a3321c45e3383a8bec4f4abb3d69bbf946e7ca0"} Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.801235 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.801496 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.801352 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cf6dd" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.804812 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.805030 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.805315 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.805484 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.806716 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.808373 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" 
pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.808771 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.809032 4953 status_manager.go:851] "Failed to get status for pod" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" pod="openshift-marketplace/redhat-marketplace-gnxp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gnxp9\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.809265 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.813813 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.816757 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.817193 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.826402 4953 status_manager.go:851] "Failed to get status for pod" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" pod="openshift-marketplace/redhat-marketplace-gnxp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gnxp9\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.827657 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.828297 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" 
pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.828661 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.829161 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.829911 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.830362 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.831639 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.832712 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.833019 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.833564 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.834097 4953 status_manager.go:851] "Failed to get status for pod" 
podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.834422 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.834786 4953 status_manager.go:851] "Failed to get status for pod" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" pod="openshift-marketplace/redhat-marketplace-gnxp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gnxp9\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.835043 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.835282 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.835543 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.835859 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.836918 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.838784 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.842738 4953 status_manager.go:851] "Failed to get 
status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.844277 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.844689 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.845016 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.845327 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.845671 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.848123 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.848352 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.848643 4953 status_manager.go:851] "Failed to get status for pod" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" pod="openshift-marketplace/redhat-marketplace-gnxp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gnxp9\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:33 crc kubenswrapper[4953]: I1211 10:16:33.848899 4953 status_manager.go:851] "Failed 
to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:34 crc kubenswrapper[4953]: I1211 10:16:34.483228 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.584007 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.584370 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.601054 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.601099 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.643264 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.643461 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.696395 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.696951 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.697160 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.697477 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.697962 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.698242 4953 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.698462 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.698648 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.699001 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:35 crc kubenswrapper[4953]: I1211 10:16:35.699616 4953 status_manager.go:851] "Failed to get status for pod" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" pod="openshift-marketplace/redhat-marketplace-gnxp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gnxp9\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:36 crc kubenswrapper[4953]: I1211 10:16:36.640599 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2f46z" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerName="registry-server" probeResult="failure" output=< Dec 11 10:16:36 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s Dec 11 10:16:36 crc kubenswrapper[4953]: > Dec 11 10:16:36 crc kubenswrapper[4953]: I1211 10:16:36.640651 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pxglb" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" containerName="registry-server" probeResult="failure" output=< Dec 11 10:16:36 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s Dec 11 10:16:36 crc kubenswrapper[4953]: > Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.350959 4953 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.351030 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.472768 
4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.473856 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.474522 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.475017 4953 status_manager.go:851] "Failed to get status for pod" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" pod="openshift-marketplace/redhat-marketplace-gnxp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gnxp9\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.475265 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.475647 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.475948 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.476199 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.476545 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.476956 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.490693 4953 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.490725 4953 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5" Dec 11 10:16:37 crc kubenswrapper[4953]: E1211 10:16:37.491204 4953 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.491848 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.825065 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.826177 4953 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83" exitCode=1 Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.826287 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83"} Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.827321 4953 scope.go:117] "RemoveContainer" containerID="7453febb17d4aadef8c87c8d256a0339b441e2bed33a20a3f7cf88b4d0ce5a83" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.827483 4953 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.827795 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.828090 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.828168 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"990adce63290f12f1c8b7f7bc02f9e16a1f44c78c37e253d6272ba1ab90d0a00"} Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.828598 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.828844 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.829209 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.829412 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.829650 4953 status_manager.go:851] "Failed to get status for pod" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" pod="openshift-marketplace/redhat-marketplace-gnxp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gnxp9\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.829838 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.830039 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.831876 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxp9" event={"ID":"fe9b2116-8ab4-4c4c-8c58-74e62f28893d","Type":"ContainerStarted","Data":"18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2"} Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.833116 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.833419 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.833873 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.834069 4953 status_manager.go:851] "Failed to get status for pod" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" pod="openshift-marketplace/redhat-marketplace-gnxp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gnxp9\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.834233 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.834397 4953 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.834593 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.834762 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.834912 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:37 crc kubenswrapper[4953]: I1211 10:16:37.835078 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.632234 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.632807 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:16:38 crc kubenswrapper[4953]: E1211 10:16:38.644180 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.134:6443: connect: connection refused" interval="7s" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.794412 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.795159 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.795365 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.795632 4953 status_manager.go:851] "Failed to get status for pod" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" pod="openshift-marketplace/redhat-marketplace-gnxp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gnxp9\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.795831 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.796037 4953 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.796235 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection 
refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.796410 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.796595 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.796757 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.796922 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.838028 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9eab272951f08007b4ca625718ef769c44392b4daad94417d2c9ead36db7e0ea"} Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.882773 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.883329 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.883547 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.883961 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.884492 4953 status_manager.go:851] "Failed to get status for pod" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" pod="openshift-marketplace/redhat-marketplace-gnxp9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gnxp9\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.884743 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.884939 4953 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.885152 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.885347 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.885527 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:38 crc kubenswrapper[4953]: I1211 10:16:38.885737 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: E1211 10:16:40.827048 4953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.134:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-gnxp9.188021c4df0f9304 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-gnxp9,UID:fe9b2116-8ab4-4c4c-8c58-74e62f28893d,APIVersion:v1,ResourceVersion:28292,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 21.223s (21.223s including waiting). 
Image size: 1154573130 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 10:16:23.3688809 +0000 UTC m=+301.392739933,LastTimestamp:2025-12-11 10:16:23.3688809 +0000 UTC m=+301.392739933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.853426 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.853512 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"482ddf828e078e5a4179ee4bf76e6bb101d0c5ff2d3198fffa0a2040898566f5"} Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.854748 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.855213 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.855567 4953 status_manager.go:851] "Failed to get status for pod" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" pod="openshift-marketplace/redhat-marketplace-gnxp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gnxp9\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.855808 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.855982 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"9eab272951f08007b4ca625718ef769c44392b4daad94417d2c9ead36db7e0ea"} Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.855894 4953 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="9eab272951f08007b4ca625718ef769c44392b4daad94417d2c9ead36db7e0ea" exitCode=0 Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.856158 4953 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc 
kubenswrapper[4953]: I1211 10:16:40.856311 4953 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.856427 4953 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.856782 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: E1211 10:16:40.856831 4953 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.857081 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.857529 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.858034 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.858556 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.859316 4953 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.859903 4953 status_manager.go:851] "Failed to get status for pod" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" pod="openshift-marketplace/redhat-marketplace-kkp25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkp25\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 
10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.860365 4953 status_manager.go:851] "Failed to get status for pod" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.860806 4953 status_manager.go:851] "Failed to get status for pod" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" pod="openshift-marketplace/redhat-operators-pxglb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pxglb\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.861184 4953 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.861676 4953 status_manager.go:851] "Failed to get status for pod" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" pod="openshift-marketplace/redhat-operators-2f46z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2f46z\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.861934 4953 status_manager.go:851] "Failed to get status for pod" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" pod="openshift-marketplace/community-operators-dbdlx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dbdlx\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.862880 4953 status_manager.go:851] "Failed to get status for pod" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" pod="openshift-marketplace/certified-operators-cf6dd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cf6dd\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.863369 4953 status_manager.go:851] "Failed to get status for pod" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" pod="openshift-marketplace/redhat-marketplace-gnxp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gnxp9\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:40 crc kubenswrapper[4953]: I1211 10:16:40.864193 4953 status_manager.go:851] "Failed to get status for pod" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" pod="openshift-marketplace/certified-operators-w2rvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w2rvh\": dial tcp 38.129.56.134:6443: connect: connection refused" Dec 11 10:16:41 crc kubenswrapper[4953]: I1211 10:16:41.867945 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c1495f8688e16c14838970a8e3f32e0b38252a025024a493ab39d9b440d7e6e3"} Dec 11 10:16:42 crc kubenswrapper[4953]: I1211 10:16:42.881752 4953 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"497f7b6ab0c84e11515fd376f60b42527bab3e4ddb6dd305b7e287d8d209a696"} Dec 11 10:16:43 crc kubenswrapper[4953]: I1211 10:16:43.571887 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 10:16:43 crc kubenswrapper[4953]: I1211 10:16:43.625876 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 10:16:43 crc kubenswrapper[4953]: I1211 10:16:43.893479 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c998cd2d91d53af24896abaa339ec8669a671905c90902d7bb531b929be52a1e"} Dec 11 10:16:43 crc kubenswrapper[4953]: I1211 10:16:43.893627 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 10:16:44 crc kubenswrapper[4953]: I1211 10:16:44.902948 4953 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5" Dec 11 10:16:44 crc kubenswrapper[4953]: I1211 10:16:44.903015 4953 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5" Dec 11 10:16:44 crc kubenswrapper[4953]: I1211 10:16:44.903313 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"35ee9e3fe0f3007b7404dc638d1509af089c1ba7ebef42b8e4d372587576f951"} Dec 11 10:16:44 crc kubenswrapper[4953]: I1211 10:16:44.903362 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"936656fef03bccc059f1f2131a5754960aa743318268ecac016265a2b724b793"} Dec 11 10:16:44 crc kubenswrapper[4953]: I1211 10:16:44.903430 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:45 crc kubenswrapper[4953]: I1211 10:16:45.203939 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:16:45 crc kubenswrapper[4953]: I1211 10:16:45.204321 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:16:45 crc kubenswrapper[4953]: I1211 10:16:45.250215 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:16:45 crc kubenswrapper[4953]: I1211 10:16:45.633168 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:16:45 crc kubenswrapper[4953]: I1211 10:16:45.645924 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:16:45 crc kubenswrapper[4953]: I1211 10:16:45.680399 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:16:45 
crc kubenswrapper[4953]: I1211 10:16:45.681157 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:16:45 crc kubenswrapper[4953]: I1211 10:16:45.691490 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:16:45 crc kubenswrapper[4953]: I1211 10:16:45.946787 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:16:47 crc kubenswrapper[4953]: I1211 10:16:47.492627 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:47 crc kubenswrapper[4953]: I1211 10:16:47.492813 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:47 crc kubenswrapper[4953]: I1211 10:16:47.498312 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:49 crc kubenswrapper[4953]: I1211 10:16:49.943369 4953 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:50 crc kubenswrapper[4953]: I1211 10:16:50.983257 4953 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5" Dec 11 10:16:50 crc kubenswrapper[4953]: I1211 10:16:50.983293 4953 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5" Dec 11 10:16:50 crc kubenswrapper[4953]: I1211 10:16:50.986809 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:16:50 crc kubenswrapper[4953]: I1211 10:16:50.989334 4953 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="baaa535f-ce4a-46e1-841f-3aee4f9516bb" Dec 11 10:16:51 crc kubenswrapper[4953]: I1211 10:16:51.989185 4953 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5" Dec 11 10:16:51 crc kubenswrapper[4953]: I1211 10:16:51.989253 4953 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a7f8ca70-14ac-499f-9a73-c03f1cb9d3f5" Dec 11 10:16:52 crc kubenswrapper[4953]: I1211 10:16:52.528701 4953 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="baaa535f-ce4a-46e1-841f-3aee4f9516bb" Dec 11 10:16:57 crc kubenswrapper[4953]: I1211 10:16:57.354232 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 10:16:58 crc kubenswrapper[4953]: I1211 10:16:58.657646 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 11 10:16:59 crc kubenswrapper[4953]: I1211 10:16:59.589368 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 11 10:16:59 crc kubenswrapper[4953]: I1211 10:16:59.880038 4953 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 11 10:17:00 crc kubenswrapper[4953]: I1211 10:17:00.019807 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 11 10:17:00 crc kubenswrapper[4953]: I1211 10:17:00.028717 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 10:17:00 crc kubenswrapper[4953]: I1211 10:17:00.364414 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 11 10:17:00 crc kubenswrapper[4953]: I1211 10:17:00.412096 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 11 10:17:00 crc kubenswrapper[4953]: I1211 10:17:00.584109 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 11 10:17:00 crc kubenswrapper[4953]: I1211 10:17:00.874604 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 11 10:17:01 crc kubenswrapper[4953]: I1211 10:17:01.263327 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 11 10:17:01 crc kubenswrapper[4953]: I1211 10:17:01.299441 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 10:17:01 crc kubenswrapper[4953]: I1211 10:17:01.475557 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 11 10:17:01 crc kubenswrapper[4953]: I1211 10:17:01.616275 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 11 10:17:01 crc kubenswrapper[4953]: I1211 10:17:01.619324 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 11 10:17:01 crc kubenswrapper[4953]: I1211 10:17:01.783528 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 11 10:17:01 crc kubenswrapper[4953]: I1211 10:17:01.838942 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 11 10:17:01 crc kubenswrapper[4953]: I1211 10:17:01.945439 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.010945 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.013042 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.031134 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.104512 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 11 10:17:02 crc 
kubenswrapper[4953]: I1211 10:17:02.122292 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.282517 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.383521 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.393665 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.429810 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.436716 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.615302 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.665504 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.727285 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.787670 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.792119 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.800520 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.922278 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.935156 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.938877 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.965249 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 11 10:17:02 crc kubenswrapper[4953]: I1211 10:17:02.989825 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.135920 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.203974 4953 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.214095 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.317980 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.353386 4953 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.358677 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.424987 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.451605 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.507527 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.516854 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.735436 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.824476 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.919145 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.938758 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.961267 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 11 10:17:03 crc kubenswrapper[4953]: I1211 10:17:03.983964 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.021411 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.213476 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.234977 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.269006 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.365082 4953 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"env-overrides" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.389066 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.443591 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.448390 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.472502 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.491786 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.504049 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.563646 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.566052 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.704060 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.721810 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.733850 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.835407 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.858104 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 10:17:04 crc kubenswrapper[4953]: I1211 10:17:04.975994 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.073100 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.121004 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.121055 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.135443 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.137084 4953 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.346938 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.387459 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.394251 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.400823 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.418666 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.437373 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.565481 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.735386 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.790121 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.855254 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.942920 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 10:17:05 crc kubenswrapper[4953]: I1211 10:17:05.994686 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.037039 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.047450 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.066948 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.138716 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.168930 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.198913 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.262759 4953 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.269524 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.333110 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.334993 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.335707 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.340143 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.353765 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.388465 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.439849 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.453763 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.550844 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.580757 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.645176 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.710157 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.719022 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.847905 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.891564 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 10:17:06 crc kubenswrapper[4953]: I1211 10:17:06.929919 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 11 10:17:07 crc kubenswrapper[4953]: I1211 10:17:07.028591 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 11 10:17:07 crc kubenswrapper[4953]: I1211 10:17:07.175616 4953 reflector.go:368] Caches 
populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 11 10:17:07 crc kubenswrapper[4953]: I1211 10:17:07.238524 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 11 10:17:07 crc kubenswrapper[4953]: I1211 10:17:07.242813 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 11 10:17:07 crc kubenswrapper[4953]: I1211 10:17:07.320531 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 11 10:17:07 crc kubenswrapper[4953]: I1211 10:17:07.340046 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 11 10:17:07 crc kubenswrapper[4953]: I1211 10:17:07.365711 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 11 10:17:07 crc kubenswrapper[4953]: I1211 10:17:07.372014 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 11 10:17:07 crc kubenswrapper[4953]: I1211 10:17:07.598566 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.063605 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.064970 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.090806 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.106980 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.160481 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.164903 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.170139 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.225392 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.244668 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.256370 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.257388 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.347440 4953 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.452993 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.486207 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.557219 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.619767 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.670717 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.714029 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.741267 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.742186 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.748839 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.764853 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.834857 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.849751 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.861238 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.882513 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.913652 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.966249 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.978898 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 11 10:17:08 crc kubenswrapper[4953]: I1211 10:17:08.998152 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 11 10:17:09 crc kubenswrapper[4953]: I1211 10:17:09.132203 4953 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 11 10:17:09 crc kubenswrapper[4953]: I1211 10:17:09.184527 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 11 10:17:09 crc kubenswrapper[4953]: I1211 10:17:09.206916 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 11 10:17:09 crc kubenswrapper[4953]: I1211 10:17:09.219209 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 11 10:17:09 crc kubenswrapper[4953]: I1211 10:17:09.383229 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 11 10:17:09 crc kubenswrapper[4953]: I1211 10:17:09.408260 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 10:17:09 crc kubenswrapper[4953]: I1211 10:17:09.553485 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 10:17:09 crc kubenswrapper[4953]: I1211 10:17:09.615166 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 11 10:17:09 crc kubenswrapper[4953]: I1211 10:17:09.652599 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 11 10:17:09 crc kubenswrapper[4953]: I1211 10:17:09.718472 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 11 10:17:09 crc kubenswrapper[4953]: I1211 10:17:09.801696 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 10:17:09 crc kubenswrapper[4953]: I1211 10:17:09.875899 4953 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.081752 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.147405 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.189338 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.228684 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.232138 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.307165 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.387369 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.530567 4953 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.567139 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.641707 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.713506 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.779240 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.922046 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.947520 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 11 10:17:10 crc kubenswrapper[4953]: I1211 10:17:10.988727 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.070399 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.089539 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.191795 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.283240 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.322366 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.440187 4953 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.657722 4953 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.659424 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kkp25" podStartSLOduration=48.151425592 podStartE2EDuration="3m1.659394952s" podCreationTimestamp="2025-12-11 10:14:10 +0000 UTC" firstStartedPulling="2025-12-11 10:14:19.24616603 +0000 UTC m=+177.270025063" lastFinishedPulling="2025-12-11 10:16:32.75413539 +0000 UTC m=+310.777994423" observedRunningTime="2025-12-11 10:16:49.011835428 +0000 UTC m=+327.035694481" watchObservedRunningTime="2025-12-11 10:17:11.659394952 +0000 UTC m=+349.683254035" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.660189 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=49.660171508 
podStartE2EDuration="49.660171508s" podCreationTimestamp="2025-12-11 10:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:16:48.75942579 +0000 UTC m=+326.783284843" watchObservedRunningTime="2025-12-11 10:17:11.660171508 +0000 UTC m=+349.684030581" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.662086 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2f46z" podStartSLOduration=48.221793774 podStartE2EDuration="3m1.662072651s" podCreationTimestamp="2025-12-11 10:14:10 +0000 UTC" firstStartedPulling="2025-12-11 10:14:19.25318264 +0000 UTC m=+177.277041683" lastFinishedPulling="2025-12-11 10:16:32.693461527 +0000 UTC m=+310.717320560" observedRunningTime="2025-12-11 10:16:48.781404779 +0000 UTC m=+326.805263842" watchObservedRunningTime="2025-12-11 10:17:11.662072651 +0000 UTC m=+349.685931714" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.664183 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gnxp9" podStartSLOduration=45.371606921 podStartE2EDuration="3m1.66412763s" podCreationTimestamp="2025-12-11 10:14:10 +0000 UTC" firstStartedPulling="2025-12-11 10:14:19.219314675 +0000 UTC m=+177.243173708" lastFinishedPulling="2025-12-11 10:16:35.511835374 +0000 UTC m=+313.535694417" observedRunningTime="2025-12-11 10:16:48.868283843 +0000 UTC m=+326.892142876" watchObservedRunningTime="2025-12-11 10:17:11.66412763 +0000 UTC m=+349.687986673" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.666972 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pxglb" podStartSLOduration=48.136874014 podStartE2EDuration="3m1.666955324s" podCreationTimestamp="2025-12-11 10:14:10 +0000 UTC" firstStartedPulling="2025-12-11 10:14:19.227550277 +0000 UTC m=+177.251409320" lastFinishedPulling="2025-12-11 10:16:32.757631597 +0000 UTC m=+310.781490630" observedRunningTime="2025-12-11 10:16:48.652359637 +0000 UTC m=+326.676218680" watchObservedRunningTime="2025-12-11 10:17:11.666955324 +0000 UTC m=+349.690814357" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.667522 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w2rvh" podStartSLOduration=51.146417777 podStartE2EDuration="3m4.667514162s" podCreationTimestamp="2025-12-11 10:14:07 +0000 UTC" firstStartedPulling="2025-12-11 10:14:19.236927135 +0000 UTC m=+177.260786168" lastFinishedPulling="2025-12-11 10:16:32.75802352 +0000 UTC m=+310.781882553" observedRunningTime="2025-12-11 10:16:48.959667497 +0000 UTC m=+326.983526530" watchObservedRunningTime="2025-12-11 10:17:11.667514162 +0000 UTC m=+349.691373195" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.668612 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/certified-operators-cf6dd","openshift-marketplace/community-operators-dbdlx"] Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.668710 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.733160 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 
10:17:11.738209 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.750713 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.750695049 podStartE2EDuration="22.750695049s" podCreationTimestamp="2025-12-11 10:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:17:11.749591382 +0000 UTC m=+349.773450405" watchObservedRunningTime="2025-12-11 10:17:11.750695049 +0000 UTC m=+349.774554082" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.767462 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.824730 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 10:17:11 crc kubenswrapper[4953]: I1211 10:17:11.977560 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 11 10:17:12 crc kubenswrapper[4953]: I1211 10:17:12.075719 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 11 10:17:12 crc kubenswrapper[4953]: I1211 10:17:12.133954 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 10:17:12 crc kubenswrapper[4953]: I1211 10:17:12.305966 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 10:17:12 crc kubenswrapper[4953]: I1211 10:17:12.481334 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 10:17:12 crc kubenswrapper[4953]: I1211 10:17:12.482702 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" path="/var/lib/kubelet/pods/c6173b60-4d44-435b-a606-0b3836f71ad2/volumes" Dec 11 10:17:12 crc kubenswrapper[4953]: I1211 10:17:12.484453 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" path="/var/lib/kubelet/pods/cef0b6d3-40d2-4981-894b-962df1304c36/volumes" Dec 11 10:17:12 crc kubenswrapper[4953]: I1211 10:17:12.686189 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 11 10:17:13 crc kubenswrapper[4953]: I1211 10:17:13.005365 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 11 10:17:13 crc kubenswrapper[4953]: I1211 10:17:13.023374 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 11 10:17:13 crc kubenswrapper[4953]: I1211 10:17:13.147711 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 10:17:13 crc kubenswrapper[4953]: I1211 10:17:13.186303 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 11 10:17:13 crc kubenswrapper[4953]: I1211 10:17:13.385094 4953 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 11 10:17:13 crc kubenswrapper[4953]: I1211 10:17:13.807787 4953 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 11 10:17:14 crc kubenswrapper[4953]: I1211 10:17:14.069105 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 11 10:17:14 crc kubenswrapper[4953]: I1211 10:17:14.304500 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 11 10:17:14 crc kubenswrapper[4953]: I1211 10:17:14.335843 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 11 10:17:14 crc kubenswrapper[4953]: I1211 10:17:14.423347 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 11 10:17:14 crc kubenswrapper[4953]: I1211 10:17:14.534758 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 10:17:14 crc kubenswrapper[4953]: I1211 10:17:14.627495 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 11 10:17:15 crc kubenswrapper[4953]: I1211 10:17:15.164830 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 11 10:17:21 crc kubenswrapper[4953]: I1211 10:17:21.802226 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 11 10:17:22 crc kubenswrapper[4953]: I1211 10:17:22.795238 4953 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 10:17:22 crc kubenswrapper[4953]: I1211 10:17:22.795823 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8342838c53b33171c8aa25456ff49154589485951fdf68f8d8b0f18798ef9384" gracePeriod=5 Dec 11 10:17:22 crc kubenswrapper[4953]: I1211 10:17:22.959066 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.077681 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.219195 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.219255 4953 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8342838c53b33171c8aa25456ff49154589485951fdf68f8d8b0f18798ef9384" exitCode=137 Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.385641 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.386262 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.481152 4953 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.498104 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.498139 4953 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3bf1c7e3-5ac2-4eaf-8fbe-2f90c4a1074d" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.503321 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.503508 4953 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3bf1c7e3-5ac2-4eaf-8fbe-2f90c4a1074d" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.560666 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.560714 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.560752 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.560838 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.560854 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.561329 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.561319 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.561373 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.561631 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.573348 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.576431 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.672956 4953 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.673041 4953 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.673073 4953 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.673097 4953 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.673125 4953 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.692138 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.921446 4953 scope.go:117] "RemoveContainer" containerID="5850c59617cbc5cbf3d86246bfb8d7645964fdb32f406648e47de3d2e1dcca39" 
Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.937833 4953 scope.go:117] "RemoveContainer" containerID="c4536100178c4611eea8cfe2b15d79f55fdd32b7004de523cc2694bd656ce5be" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.956150 4953 scope.go:117] "RemoveContainer" containerID="d2348bd7a336966cd91aa6ba1cf71771e7fd111085acbb0481adee82d7a6e109" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.978898 4953 scope.go:117] "RemoveContainer" containerID="89487ecc0b25583d92a2adb537e660618a1f0477d9b0ca805c7d5cc120a38ef5" Dec 11 10:17:28 crc kubenswrapper[4953]: I1211 10:17:28.994056 4953 scope.go:117] "RemoveContainer" containerID="afbf1d478a1ccbd17c29483adf2e39e60be93dfde72d96dd4c45ee2b81c7db7f" Dec 11 10:17:29 crc kubenswrapper[4953]: I1211 10:17:29.226764 4953 scope.go:117] "RemoveContainer" containerID="8342838c53b33171c8aa25456ff49154589485951fdf68f8d8b0f18798ef9384" Dec 11 10:17:29 crc kubenswrapper[4953]: I1211 10:17:29.226765 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:17:29 crc kubenswrapper[4953]: I1211 10:17:29.659712 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 10:17:30 crc kubenswrapper[4953]: I1211 10:17:30.046057 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 11 10:17:30 crc kubenswrapper[4953]: I1211 10:17:30.437443 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 11 10:17:30 crc kubenswrapper[4953]: I1211 10:17:30.480774 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 11 10:17:31 crc kubenswrapper[4953]: I1211 10:17:31.140958 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 11 10:17:32 crc kubenswrapper[4953]: I1211 10:17:32.619336 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 11 10:17:33 crc kubenswrapper[4953]: I1211 10:17:33.219385 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 11 10:17:33 crc kubenswrapper[4953]: I1211 10:17:33.277012 4953 generic.go:334] "Generic (PLEG): container finished" podID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerID="512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f" exitCode=0 Dec 11 10:17:33 crc kubenswrapper[4953]: I1211 10:17:33.277091 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" event={"ID":"06554344-a634-4dec-aaf7-e3d9919d9e80","Type":"ContainerDied","Data":"512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f"} Dec 11 10:17:33 crc kubenswrapper[4953]: I1211 10:17:33.278322 4953 scope.go:117] "RemoveContainer" containerID="512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f" Dec 11 10:17:34 crc kubenswrapper[4953]: I1211 10:17:34.272005 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 11 10:17:34 crc kubenswrapper[4953]: I1211 10:17:34.285783 4953 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" event={"ID":"06554344-a634-4dec-aaf7-e3d9919d9e80","Type":"ContainerStarted","Data":"198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035"} Dec 11 10:17:34 crc kubenswrapper[4953]: I1211 10:17:34.286120 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:17:34 crc kubenswrapper[4953]: I1211 10:17:34.287448 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:17:34 crc kubenswrapper[4953]: I1211 10:17:34.384061 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 11 10:17:34 crc kubenswrapper[4953]: I1211 10:17:34.532347 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 10:17:34 crc kubenswrapper[4953]: I1211 10:17:34.588437 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 11 10:17:34 crc kubenswrapper[4953]: I1211 10:17:34.643291 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.030318 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.152507 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w2rvh"] Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.152964 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w2rvh" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" containerName="registry-server" containerID="cri-o://21d5ba454bbbb7dd8c66a2b86d0764c525225c76a2b143c1aa1102d65d0d8bb3" gracePeriod=30 Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.167999 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l5pbm"] Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.168264 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l5pbm" podUID="3b099bc8-faec-451b-88a3-f03e46e3ad94" containerName="registry-server" containerID="cri-o://712dc190de17abed413e4e7eadcec31160c952c72a60dc5438de29e84c8d93ed" gracePeriod=30 Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.172019 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmb4p"] Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.181699 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnxp9"] Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.181994 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gnxp9" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" containerName="registry-server" containerID="cri-o://18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2" gracePeriod=30 Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.187807 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-kkp25"] Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.188792 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kkp25" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" containerName="registry-server" containerID="cri-o://e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9" gracePeriod=30 Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.200863 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2f46z"] Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.201168 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2f46z" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerName="registry-server" containerID="cri-o://9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a" gracePeriod=30 Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.216444 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pxglb"] Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.216705 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pxglb" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" containerName="registry-server" containerID="cri-o://a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8" gracePeriod=30 Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.216821 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.218180 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2 is running failed: container process not found" containerID="18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.218940 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2 is running failed: container process not found" containerID="18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.219032 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gnxp9" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" containerName="registry-server" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.219631 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zqkh"] Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.219948 4953 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cef0b6d3-40d2-4981-894b-962df1304c36" containerName="registry-server" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.219984 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" containerName="registry-server" Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.220005 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" containerName="installer" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.220015 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" containerName="installer" Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.220027 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" containerName="extract-content" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.220035 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" containerName="extract-content" Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.220047 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" containerName="extract-content" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.220055 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" containerName="extract-content" Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.220070 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.220095 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.220110 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" containerName="registry-server" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.220119 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" containerName="registry-server" Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.220140 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" containerName="extract-utilities" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.220149 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" containerName="extract-utilities" Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.220160 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" containerName="extract-utilities" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.220169 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" containerName="extract-utilities" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.220323 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="277dd1b3-16e0-4ca6-9ceb-dc3f7ddc0c7e" containerName="installer" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.220343 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.220354 4953 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="c6173b60-4d44-435b-a606-0b3836f71ad2" containerName="registry-server" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.220367 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef0b6d3-40d2-4981-894b-962df1304c36" containerName="registry-server" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.220941 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.223793 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1de6fb85-e275-4bf8-84f3-ab4a3b1e5565-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2zqkh\" (UID: \"1de6fb85-e275-4bf8-84f3-ab4a3b1e5565\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.223857 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1de6fb85-e275-4bf8-84f3-ab4a3b1e5565-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2zqkh\" (UID: \"1de6fb85-e275-4bf8-84f3-ab4a3b1e5565\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.223994 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq2v4\" (UniqueName: \"kubernetes.io/projected/1de6fb85-e275-4bf8-84f3-ab4a3b1e5565-kube-api-access-nq2v4\") pod \"marketplace-operator-79b997595-2zqkh\" (UID: \"1de6fb85-e275-4bf8-84f3-ab4a3b1e5565\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.230289 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zqkh"] Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.294848 4953 generic.go:334] "Generic (PLEG): container finished" podID="3b099bc8-faec-451b-88a3-f03e46e3ad94" containerID="712dc190de17abed413e4e7eadcec31160c952c72a60dc5438de29e84c8d93ed" exitCode=0 Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.294938 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5pbm" event={"ID":"3b099bc8-faec-451b-88a3-f03e46e3ad94","Type":"ContainerDied","Data":"712dc190de17abed413e4e7eadcec31160c952c72a60dc5438de29e84c8d93ed"} Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.298738 4953 generic.go:334] "Generic (PLEG): container finished" podID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" containerID="21d5ba454bbbb7dd8c66a2b86d0764c525225c76a2b143c1aa1102d65d0d8bb3" exitCode=0 Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.298918 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2rvh" event={"ID":"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6","Type":"ContainerDied","Data":"21d5ba454bbbb7dd8c66a2b86d0764c525225c76a2b143c1aa1102d65d0d8bb3"} Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.325192 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1de6fb85-e275-4bf8-84f3-ab4a3b1e5565-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2zqkh\" (UID: 
\"1de6fb85-e275-4bf8-84f3-ab4a3b1e5565\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.325250 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1de6fb85-e275-4bf8-84f3-ab4a3b1e5565-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2zqkh\" (UID: \"1de6fb85-e275-4bf8-84f3-ab4a3b1e5565\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.325317 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq2v4\" (UniqueName: \"kubernetes.io/projected/1de6fb85-e275-4bf8-84f3-ab4a3b1e5565-kube-api-access-nq2v4\") pod \"marketplace-operator-79b997595-2zqkh\" (UID: \"1de6fb85-e275-4bf8-84f3-ab4a3b1e5565\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.328302 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1de6fb85-e275-4bf8-84f3-ab4a3b1e5565-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2zqkh\" (UID: \"1de6fb85-e275-4bf8-84f3-ab4a3b1e5565\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.333434 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1de6fb85-e275-4bf8-84f3-ab4a3b1e5565-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2zqkh\" (UID: \"1de6fb85-e275-4bf8-84f3-ab4a3b1e5565\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.348448 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq2v4\" (UniqueName: \"kubernetes.io/projected/1de6fb85-e275-4bf8-84f3-ab4a3b1e5565-kube-api-access-nq2v4\") pod \"marketplace-operator-79b997595-2zqkh\" (UID: \"1de6fb85-e275-4bf8-84f3-ab4a3b1e5565\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.583480 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.584216 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a is running failed: container process not found" containerID="9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.584643 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a is running failed: container process not found" containerID="9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.584887 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a is running failed: container process not found" containerID="9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.584923 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-2f46z" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerName="registry-server" Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.601122 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8 is running failed: container process not found" containerID="a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.601489 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8 is running failed: container process not found" containerID="a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.602059 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8 is running failed: container process not found" containerID="a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:17:35 crc kubenswrapper[4953]: E1211 10:17:35.602121 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-pxglb" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" containerName="registry-server" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.617321 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.637054 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.639270 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.646750 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.676917 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.678393 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.696841 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.700660 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733377 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67vlb\" (UniqueName: \"kubernetes.io/projected/3b099bc8-faec-451b-88a3-f03e46e3ad94-kube-api-access-67vlb\") pod \"3b099bc8-faec-451b-88a3-f03e46e3ad94\" (UID: \"3b099bc8-faec-451b-88a3-f03e46e3ad94\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733414 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-utilities\") pod \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\" (UID: \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733436 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-utilities\") pod \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\" (UID: \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733461 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6xx8\" (UniqueName: \"kubernetes.io/projected/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-kube-api-access-x6xx8\") pod \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\" (UID: \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733479 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b099bc8-faec-451b-88a3-f03e46e3ad94-utilities\") pod \"3b099bc8-faec-451b-88a3-f03e46e3ad94\" (UID: \"3b099bc8-faec-451b-88a3-f03e46e3ad94\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733497 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b099bc8-faec-451b-88a3-f03e46e3ad94-catalog-content\") pod \"3b099bc8-faec-451b-88a3-f03e46e3ad94\" (UID: \"3b099bc8-faec-451b-88a3-f03e46e3ad94\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733518 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f406ece-016a-43bc-92c9-473b85ad0ca9-catalog-content\") pod \"2f406ece-016a-43bc-92c9-473b85ad0ca9\" (UID: \"2f406ece-016a-43bc-92c9-473b85ad0ca9\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733542 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f406ece-016a-43bc-92c9-473b85ad0ca9-utilities\") pod \"2f406ece-016a-43bc-92c9-473b85ad0ca9\" (UID: \"2f406ece-016a-43bc-92c9-473b85ad0ca9\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 
10:17:35.733597 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl7ms\" (UniqueName: \"kubernetes.io/projected/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-kube-api-access-sl7ms\") pod \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\" (UID: \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733619 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n47f\" (UniqueName: \"kubernetes.io/projected/2f406ece-016a-43bc-92c9-473b85ad0ca9-kube-api-access-6n47f\") pod \"2f406ece-016a-43bc-92c9-473b85ad0ca9\" (UID: \"2f406ece-016a-43bc-92c9-473b85ad0ca9\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733644 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmd6v\" (UniqueName: \"kubernetes.io/projected/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-kube-api-access-zmd6v\") pod \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\" (UID: \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733662 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46f197d9-de5c-42c2-9781-47ed42389e11-catalog-content\") pod \"46f197d9-de5c-42c2-9781-47ed42389e11\" (UID: \"46f197d9-de5c-42c2-9781-47ed42389e11\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733684 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-catalog-content\") pod \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\" (UID: \"4468c58a-3cfc-4197-bf1b-8afc67dfda5e\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733699 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46f197d9-de5c-42c2-9781-47ed42389e11-utilities\") pod \"46f197d9-de5c-42c2-9781-47ed42389e11\" (UID: \"46f197d9-de5c-42c2-9781-47ed42389e11\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733731 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-catalog-content\") pod \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\" (UID: \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733748 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-catalog-content\") pod \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\" (UID: \"fe9b2116-8ab4-4c4c-8c58-74e62f28893d\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733770 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-utilities\") pod \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\" (UID: \"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6\") " Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.733790 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h45m\" (UniqueName: \"kubernetes.io/projected/46f197d9-de5c-42c2-9781-47ed42389e11-kube-api-access-8h45m\") pod \"46f197d9-de5c-42c2-9781-47ed42389e11\" (UID: \"46f197d9-de5c-42c2-9781-47ed42389e11\") " Dec 11 10:17:35 
crc kubenswrapper[4953]: I1211 10:17:35.736001 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b099bc8-faec-451b-88a3-f03e46e3ad94-utilities" (OuterVolumeSpecName: "utilities") pod "3b099bc8-faec-451b-88a3-f03e46e3ad94" (UID: "3b099bc8-faec-451b-88a3-f03e46e3ad94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.737105 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-utilities" (OuterVolumeSpecName: "utilities") pod "fe9b2116-8ab4-4c4c-8c58-74e62f28893d" (UID: "fe9b2116-8ab4-4c4c-8c58-74e62f28893d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.739430 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-utilities" (OuterVolumeSpecName: "utilities") pod "4468c58a-3cfc-4197-bf1b-8afc67dfda5e" (UID: "4468c58a-3cfc-4197-bf1b-8afc67dfda5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.739871 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b099bc8-faec-451b-88a3-f03e46e3ad94-kube-api-access-67vlb" (OuterVolumeSpecName: "kube-api-access-67vlb") pod "3b099bc8-faec-451b-88a3-f03e46e3ad94" (UID: "3b099bc8-faec-451b-88a3-f03e46e3ad94"). InnerVolumeSpecName "kube-api-access-67vlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.740544 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-utilities" (OuterVolumeSpecName: "utilities") pod "bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" (UID: "bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.741428 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46f197d9-de5c-42c2-9781-47ed42389e11-utilities" (OuterVolumeSpecName: "utilities") pod "46f197d9-de5c-42c2-9781-47ed42389e11" (UID: "46f197d9-de5c-42c2-9781-47ed42389e11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.741830 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f197d9-de5c-42c2-9781-47ed42389e11-kube-api-access-8h45m" (OuterVolumeSpecName: "kube-api-access-8h45m") pod "46f197d9-de5c-42c2-9781-47ed42389e11" (UID: "46f197d9-de5c-42c2-9781-47ed42389e11"). InnerVolumeSpecName "kube-api-access-8h45m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.742119 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-kube-api-access-zmd6v" (OuterVolumeSpecName: "kube-api-access-zmd6v") pod "fe9b2116-8ab4-4c4c-8c58-74e62f28893d" (UID: "fe9b2116-8ab4-4c4c-8c58-74e62f28893d"). InnerVolumeSpecName "kube-api-access-zmd6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.747075 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f406ece-016a-43bc-92c9-473b85ad0ca9-utilities" (OuterVolumeSpecName: "utilities") pod "2f406ece-016a-43bc-92c9-473b85ad0ca9" (UID: "2f406ece-016a-43bc-92c9-473b85ad0ca9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.753252 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-kube-api-access-sl7ms" (OuterVolumeSpecName: "kube-api-access-sl7ms") pod "4468c58a-3cfc-4197-bf1b-8afc67dfda5e" (UID: "4468c58a-3cfc-4197-bf1b-8afc67dfda5e"). InnerVolumeSpecName "kube-api-access-sl7ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.761431 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-kube-api-access-x6xx8" (OuterVolumeSpecName: "kube-api-access-x6xx8") pod "bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" (UID: "bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6"). InnerVolumeSpecName "kube-api-access-x6xx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.772486 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f406ece-016a-43bc-92c9-473b85ad0ca9-kube-api-access-6n47f" (OuterVolumeSpecName: "kube-api-access-6n47f") pod "2f406ece-016a-43bc-92c9-473b85ad0ca9" (UID: "2f406ece-016a-43bc-92c9-473b85ad0ca9"). InnerVolumeSpecName "kube-api-access-6n47f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.800895 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4468c58a-3cfc-4197-bf1b-8afc67dfda5e" (UID: "4468c58a-3cfc-4197-bf1b-8afc67dfda5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834855 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n47f\" (UniqueName: \"kubernetes.io/projected/2f406ece-016a-43bc-92c9-473b85ad0ca9-kube-api-access-6n47f\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834881 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmd6v\" (UniqueName: \"kubernetes.io/projected/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-kube-api-access-zmd6v\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834890 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834899 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46f197d9-de5c-42c2-9781-47ed42389e11-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834909 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h45m\" (UniqueName: \"kubernetes.io/projected/46f197d9-de5c-42c2-9781-47ed42389e11-kube-api-access-8h45m\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834916 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834925 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67vlb\" (UniqueName: \"kubernetes.io/projected/3b099bc8-faec-451b-88a3-f03e46e3ad94-kube-api-access-67vlb\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834934 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834943 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834951 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6xx8\" (UniqueName: \"kubernetes.io/projected/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-kube-api-access-x6xx8\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834961 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b099bc8-faec-451b-88a3-f03e46e3ad94-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834970 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f406ece-016a-43bc-92c9-473b85ad0ca9-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.834978 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl7ms\" (UniqueName: \"kubernetes.io/projected/4468c58a-3cfc-4197-bf1b-8afc67dfda5e-kube-api-access-sl7ms\") on node \"crc\" 
DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.857024 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe9b2116-8ab4-4c4c-8c58-74e62f28893d" (UID: "fe9b2116-8ab4-4c4c-8c58-74e62f28893d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.858210 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b099bc8-faec-451b-88a3-f03e46e3ad94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b099bc8-faec-451b-88a3-f03e46e3ad94" (UID: "3b099bc8-faec-451b-88a3-f03e46e3ad94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.866485 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.869122 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zqkh"] Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.878318 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" (UID: "bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.892167 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46f197d9-de5c-42c2-9781-47ed42389e11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46f197d9-de5c-42c2-9781-47ed42389e11" (UID: "46f197d9-de5c-42c2-9781-47ed42389e11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.894500 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.894670 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f406ece-016a-43bc-92c9-473b85ad0ca9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f406ece-016a-43bc-92c9-473b85ad0ca9" (UID: "2f406ece-016a-43bc-92c9-473b85ad0ca9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.935470 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b099bc8-faec-451b-88a3-f03e46e3ad94-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.935510 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f406ece-016a-43bc-92c9-473b85ad0ca9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.935520 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46f197d9-de5c-42c2-9781-47ed42389e11-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.935529 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe9b2116-8ab4-4c4c-8c58-74e62f28893d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:35 crc kubenswrapper[4953]: I1211 10:17:35.935538 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.305393 4953 generic.go:334] "Generic (PLEG): container finished" podID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerID="9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a" exitCode=0 Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.305454 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2f46z" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.305466 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f46z" event={"ID":"2f406ece-016a-43bc-92c9-473b85ad0ca9","Type":"ContainerDied","Data":"9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a"} Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.306043 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f46z" event={"ID":"2f406ece-016a-43bc-92c9-473b85ad0ca9","Type":"ContainerDied","Data":"23ac89f6dbb4662b081e959175284cf76f8339415386db03d6bfecb78d45b86c"} Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.306073 4953 scope.go:117] "RemoveContainer" containerID="9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.307439 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" event={"ID":"1de6fb85-e275-4bf8-84f3-ab4a3b1e5565","Type":"ContainerStarted","Data":"20272e15ea10060759b179599ac90d93a9b697ceac252dc2c81e7ffb0f264989"} Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.307496 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" event={"ID":"1de6fb85-e275-4bf8-84f3-ab4a3b1e5565","Type":"ContainerStarted","Data":"8e5ad08ef655d7b4dc7b66f9cbf9157b993e84084bd2c8fb0022b4fa7655c53a"} Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.307605 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.311321 4953 generic.go:334] "Generic (PLEG): container finished" podID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" containerID="18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2" exitCode=0 Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.311389 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnxp9" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.311413 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxp9" event={"ID":"fe9b2116-8ab4-4c4c-8c58-74e62f28893d","Type":"ContainerDied","Data":"18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2"} Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.311491 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxp9" event={"ID":"fe9b2116-8ab4-4c4c-8c58-74e62f28893d","Type":"ContainerDied","Data":"757d1eecf2c0afe79f6123bb78ee463e89fedb46d05f64216e1fed3e36c4a053"} Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.314728 4953 generic.go:334] "Generic (PLEG): container finished" podID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" containerID="e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9" exitCode=0 Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.314825 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkp25" event={"ID":"4468c58a-3cfc-4197-bf1b-8afc67dfda5e","Type":"ContainerDied","Data":"e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9"} Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.314864 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkp25" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.314878 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkp25" event={"ID":"4468c58a-3cfc-4197-bf1b-8afc67dfda5e","Type":"ContainerDied","Data":"968362bf145af4e8c6daf3916ed7c71f6368d6575126703378a2fd58e15ddee4"} Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.317860 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.317893 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxglb" event={"ID":"46f197d9-de5c-42c2-9781-47ed42389e11","Type":"ContainerDied","Data":"a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8"} Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.317903 4953 generic.go:334] "Generic (PLEG): container finished" podID="46f197d9-de5c-42c2-9781-47ed42389e11" containerID="a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8" exitCode=0 Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.317979 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxglb" event={"ID":"46f197d9-de5c-42c2-9781-47ed42389e11","Type":"ContainerDied","Data":"0932ecd86015570542fe1c1f5aadeed270563025cdf37df7353931fac3a61db9"} Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.318023 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pxglb" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.320789 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2rvh" event={"ID":"bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6","Type":"ContainerDied","Data":"d578d3c4b9e1bad41906bc892faac751523d1c31943a33eed46560b9d04193e1"} Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.320872 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w2rvh" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.323144 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerName="marketplace-operator" containerID="cri-o://198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035" gracePeriod=30 Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.323407 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5pbm" event={"ID":"3b099bc8-faec-451b-88a3-f03e46e3ad94","Type":"ContainerDied","Data":"8d51dc80d54a321ce08fbd2cafbceeba078a629c62f486aeb4aa1cef25da139d"} Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.323479 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l5pbm" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.329710 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2zqkh" podStartSLOduration=1.329693453 podStartE2EDuration="1.329693453s" podCreationTimestamp="2025-12-11 10:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:17:36.32630087 +0000 UTC m=+374.350159913" watchObservedRunningTime="2025-12-11 10:17:36.329693453 +0000 UTC m=+374.353552486" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.390416 4953 scope.go:117] "RemoveContainer" containerID="4972be10e78ae7134e8bf3613f5b433589964dcdeebc193ba0d6e7ff302ed133" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.438385 4953 scope.go:117] "RemoveContainer" containerID="1c00015e89efc7e4605224d26c62dfa4087a8777726e92860ab2a27d46ffbbb1" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.486180 4953 scope.go:117] "RemoveContainer" containerID="9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a" Dec 11 10:17:36 crc kubenswrapper[4953]: E1211 10:17:36.486689 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a\": container with ID starting with 9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a not found: ID does not exist" containerID="9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.486736 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a"} err="failed to get container status \"9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a\": rpc error: code = NotFound desc = could not find container \"9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a\": container with ID starting with 9acd0de46f0ffe055f7a961c8a6e5dc33e4dbd99bf269efa6a779bb23da8633a not found: ID does not exist" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.486763 4953 scope.go:117] "RemoveContainer" containerID="4972be10e78ae7134e8bf3613f5b433589964dcdeebc193ba0d6e7ff302ed133" Dec 11 10:17:36 crc kubenswrapper[4953]: E1211 10:17:36.487158 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4972be10e78ae7134e8bf3613f5b433589964dcdeebc193ba0d6e7ff302ed133\": container with ID starting with 4972be10e78ae7134e8bf3613f5b433589964dcdeebc193ba0d6e7ff302ed133 not found: ID does not exist" containerID="4972be10e78ae7134e8bf3613f5b433589964dcdeebc193ba0d6e7ff302ed133" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.487211 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4972be10e78ae7134e8bf3613f5b433589964dcdeebc193ba0d6e7ff302ed133"} err="failed to get container status \"4972be10e78ae7134e8bf3613f5b433589964dcdeebc193ba0d6e7ff302ed133\": rpc error: code = NotFound desc = could not find container \"4972be10e78ae7134e8bf3613f5b433589964dcdeebc193ba0d6e7ff302ed133\": container with ID starting with 4972be10e78ae7134e8bf3613f5b433589964dcdeebc193ba0d6e7ff302ed133 not found: ID does not exist" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 
10:17:36.487240 4953 scope.go:117] "RemoveContainer" containerID="1c00015e89efc7e4605224d26c62dfa4087a8777726e92860ab2a27d46ffbbb1" Dec 11 10:17:36 crc kubenswrapper[4953]: E1211 10:17:36.487553 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c00015e89efc7e4605224d26c62dfa4087a8777726e92860ab2a27d46ffbbb1\": container with ID starting with 1c00015e89efc7e4605224d26c62dfa4087a8777726e92860ab2a27d46ffbbb1 not found: ID does not exist" containerID="1c00015e89efc7e4605224d26c62dfa4087a8777726e92860ab2a27d46ffbbb1" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.487628 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c00015e89efc7e4605224d26c62dfa4087a8777726e92860ab2a27d46ffbbb1"} err="failed to get container status \"1c00015e89efc7e4605224d26c62dfa4087a8777726e92860ab2a27d46ffbbb1\": rpc error: code = NotFound desc = could not find container \"1c00015e89efc7e4605224d26c62dfa4087a8777726e92860ab2a27d46ffbbb1\": container with ID starting with 1c00015e89efc7e4605224d26c62dfa4087a8777726e92860ab2a27d46ffbbb1 not found: ID does not exist" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.487662 4953 scope.go:117] "RemoveContainer" containerID="18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.500535 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pxglb"] Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.507865 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pxglb"] Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.518656 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2f46z"] Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.521952 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2f46z"] Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.523909 4953 scope.go:117] "RemoveContainer" containerID="c858e9cf57114f1d8ef9dce55a3321c45e3383a8bec4f4abb3d69bbf946e7ca0" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.534202 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l5pbm"] Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.538645 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l5pbm"] Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.547783 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w2rvh"] Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.552031 4953 scope.go:117] "RemoveContainer" containerID="f659b208df32224758aaf8c62286bec90a053c1ef5705d12d4cc2b605c64f1d0" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.564181 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w2rvh"] Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.570812 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkp25"] Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.575891 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkp25"] Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.578546 4953 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-gnxp9"] Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.583527 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnxp9"] Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.587476 4953 scope.go:117] "RemoveContainer" containerID="18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2" Dec 11 10:17:36 crc kubenswrapper[4953]: E1211 10:17:36.588216 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2\": container with ID starting with 18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2 not found: ID does not exist" containerID="18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.588273 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2"} err="failed to get container status \"18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2\": rpc error: code = NotFound desc = could not find container \"18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2\": container with ID starting with 18abbb2ed80b43bcc4babfdcf1fbb2913bf37a9cf81cba51f67100f33496a0e2 not found: ID does not exist" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.588299 4953 scope.go:117] "RemoveContainer" containerID="c858e9cf57114f1d8ef9dce55a3321c45e3383a8bec4f4abb3d69bbf946e7ca0" Dec 11 10:17:36 crc kubenswrapper[4953]: E1211 10:17:36.588718 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c858e9cf57114f1d8ef9dce55a3321c45e3383a8bec4f4abb3d69bbf946e7ca0\": container with ID starting with c858e9cf57114f1d8ef9dce55a3321c45e3383a8bec4f4abb3d69bbf946e7ca0 not found: ID does not exist" containerID="c858e9cf57114f1d8ef9dce55a3321c45e3383a8bec4f4abb3d69bbf946e7ca0" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.588735 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c858e9cf57114f1d8ef9dce55a3321c45e3383a8bec4f4abb3d69bbf946e7ca0"} err="failed to get container status \"c858e9cf57114f1d8ef9dce55a3321c45e3383a8bec4f4abb3d69bbf946e7ca0\": rpc error: code = NotFound desc = could not find container \"c858e9cf57114f1d8ef9dce55a3321c45e3383a8bec4f4abb3d69bbf946e7ca0\": container with ID starting with c858e9cf57114f1d8ef9dce55a3321c45e3383a8bec4f4abb3d69bbf946e7ca0 not found: ID does not exist" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.588783 4953 scope.go:117] "RemoveContainer" containerID="f659b208df32224758aaf8c62286bec90a053c1ef5705d12d4cc2b605c64f1d0" Dec 11 10:17:36 crc kubenswrapper[4953]: E1211 10:17:36.591404 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f659b208df32224758aaf8c62286bec90a053c1ef5705d12d4cc2b605c64f1d0\": container with ID starting with f659b208df32224758aaf8c62286bec90a053c1ef5705d12d4cc2b605c64f1d0 not found: ID does not exist" containerID="f659b208df32224758aaf8c62286bec90a053c1ef5705d12d4cc2b605c64f1d0" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.591429 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f659b208df32224758aaf8c62286bec90a053c1ef5705d12d4cc2b605c64f1d0"} err="failed to get container status \"f659b208df32224758aaf8c62286bec90a053c1ef5705d12d4cc2b605c64f1d0\": rpc error: code = NotFound desc = could not find container \"f659b208df32224758aaf8c62286bec90a053c1ef5705d12d4cc2b605c64f1d0\": container with ID starting with f659b208df32224758aaf8c62286bec90a053c1ef5705d12d4cc2b605c64f1d0 not found: ID does not exist" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.591448 4953 scope.go:117] "RemoveContainer" containerID="e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.774383 4953 scope.go:117] "RemoveContainer" containerID="e18411fd4726cf01f45d745aeb9e324e72883ea1dd3a39e4beac0742646f2dae" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.791694 4953 scope.go:117] "RemoveContainer" containerID="8366f85b8b45f4b94f8bd1c365c7ccbf506331536c88f6ec8c38d7ebcc9650f9" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.806016 4953 scope.go:117] "RemoveContainer" containerID="e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9" Dec 11 10:17:36 crc kubenswrapper[4953]: E1211 10:17:36.806501 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9\": container with ID starting with e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9 not found: ID does not exist" containerID="e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.806558 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9"} err="failed to get container status \"e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9\": rpc error: code = NotFound desc = could not find container \"e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9\": container with ID starting with e28fbeee778975782d25ee5289ddcbdc17fdbaac5db330c2e81a70d501961dc9 not found: ID does not exist" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.806637 4953 scope.go:117] "RemoveContainer" containerID="e18411fd4726cf01f45d745aeb9e324e72883ea1dd3a39e4beac0742646f2dae" Dec 11 10:17:36 crc kubenswrapper[4953]: E1211 10:17:36.807003 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e18411fd4726cf01f45d745aeb9e324e72883ea1dd3a39e4beac0742646f2dae\": container with ID starting with e18411fd4726cf01f45d745aeb9e324e72883ea1dd3a39e4beac0742646f2dae not found: ID does not exist" containerID="e18411fd4726cf01f45d745aeb9e324e72883ea1dd3a39e4beac0742646f2dae" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.807042 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18411fd4726cf01f45d745aeb9e324e72883ea1dd3a39e4beac0742646f2dae"} err="failed to get container status \"e18411fd4726cf01f45d745aeb9e324e72883ea1dd3a39e4beac0742646f2dae\": rpc error: code = NotFound desc = could not find container \"e18411fd4726cf01f45d745aeb9e324e72883ea1dd3a39e4beac0742646f2dae\": container with ID starting with e18411fd4726cf01f45d745aeb9e324e72883ea1dd3a39e4beac0742646f2dae not found: ID does not exist" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.807074 4953 
scope.go:117] "RemoveContainer" containerID="8366f85b8b45f4b94f8bd1c365c7ccbf506331536c88f6ec8c38d7ebcc9650f9" Dec 11 10:17:36 crc kubenswrapper[4953]: E1211 10:17:36.807435 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8366f85b8b45f4b94f8bd1c365c7ccbf506331536c88f6ec8c38d7ebcc9650f9\": container with ID starting with 8366f85b8b45f4b94f8bd1c365c7ccbf506331536c88f6ec8c38d7ebcc9650f9 not found: ID does not exist" containerID="8366f85b8b45f4b94f8bd1c365c7ccbf506331536c88f6ec8c38d7ebcc9650f9" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.807481 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8366f85b8b45f4b94f8bd1c365c7ccbf506331536c88f6ec8c38d7ebcc9650f9"} err="failed to get container status \"8366f85b8b45f4b94f8bd1c365c7ccbf506331536c88f6ec8c38d7ebcc9650f9\": rpc error: code = NotFound desc = could not find container \"8366f85b8b45f4b94f8bd1c365c7ccbf506331536c88f6ec8c38d7ebcc9650f9\": container with ID starting with 8366f85b8b45f4b94f8bd1c365c7ccbf506331536c88f6ec8c38d7ebcc9650f9 not found: ID does not exist" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.807511 4953 scope.go:117] "RemoveContainer" containerID="a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.816518 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.818723 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.825114 4953 scope.go:117] "RemoveContainer" containerID="8e496bb06849616e21236ee40f86b43a8f15a8473596a3beef266909bfda57b5" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.855325 4953 scope.go:117] "RemoveContainer" containerID="db03f21c27567a57fd340d4410c247316b54cd4b0a32d0b44758e0041a8b16f9" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.869976 4953 scope.go:117] "RemoveContainer" containerID="a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8" Dec 11 10:17:36 crc kubenswrapper[4953]: E1211 10:17:36.870549 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8\": container with ID starting with a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8 not found: ID does not exist" containerID="a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.870663 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8"} err="failed to get container status \"a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8\": rpc error: code = NotFound desc = could not find container \"a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8\": container with ID starting with a4cdf564d7724667a615ff95a4a62a06e3e554763478f3f962e6d4fc3bafb5f8 not found: ID does not exist" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.870705 4953 scope.go:117] "RemoveContainer" containerID="8e496bb06849616e21236ee40f86b43a8f15a8473596a3beef266909bfda57b5" Dec 11 10:17:36 crc 
kubenswrapper[4953]: E1211 10:17:36.871083 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e496bb06849616e21236ee40f86b43a8f15a8473596a3beef266909bfda57b5\": container with ID starting with 8e496bb06849616e21236ee40f86b43a8f15a8473596a3beef266909bfda57b5 not found: ID does not exist" containerID="8e496bb06849616e21236ee40f86b43a8f15a8473596a3beef266909bfda57b5" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.871102 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e496bb06849616e21236ee40f86b43a8f15a8473596a3beef266909bfda57b5"} err="failed to get container status \"8e496bb06849616e21236ee40f86b43a8f15a8473596a3beef266909bfda57b5\": rpc error: code = NotFound desc = could not find container \"8e496bb06849616e21236ee40f86b43a8f15a8473596a3beef266909bfda57b5\": container with ID starting with 8e496bb06849616e21236ee40f86b43a8f15a8473596a3beef266909bfda57b5 not found: ID does not exist" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.871127 4953 scope.go:117] "RemoveContainer" containerID="db03f21c27567a57fd340d4410c247316b54cd4b0a32d0b44758e0041a8b16f9" Dec 11 10:17:36 crc kubenswrapper[4953]: E1211 10:17:36.871370 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db03f21c27567a57fd340d4410c247316b54cd4b0a32d0b44758e0041a8b16f9\": container with ID starting with db03f21c27567a57fd340d4410c247316b54cd4b0a32d0b44758e0041a8b16f9 not found: ID does not exist" containerID="db03f21c27567a57fd340d4410c247316b54cd4b0a32d0b44758e0041a8b16f9" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.871451 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db03f21c27567a57fd340d4410c247316b54cd4b0a32d0b44758e0041a8b16f9"} err="failed to get container status \"db03f21c27567a57fd340d4410c247316b54cd4b0a32d0b44758e0041a8b16f9\": rpc error: code = NotFound desc = could not find container \"db03f21c27567a57fd340d4410c247316b54cd4b0a32d0b44758e0041a8b16f9\": container with ID starting with db03f21c27567a57fd340d4410c247316b54cd4b0a32d0b44758e0041a8b16f9 not found: ID does not exist" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.871483 4953 scope.go:117] "RemoveContainer" containerID="21d5ba454bbbb7dd8c66a2b86d0764c525225c76a2b143c1aa1102d65d0d8bb3" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.883915 4953 scope.go:117] "RemoveContainer" containerID="716c7bc31ebb28ad0a2637286a01f7c8cc6b1fb54a5615922d5a2021ae0d4ef5" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.903474 4953 scope.go:117] "RemoveContainer" containerID="1771c954424aecc637c957b97da9788db4fdf5b8c7ce9bd839dfb771c6515e1f" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.917194 4953 scope.go:117] "RemoveContainer" containerID="712dc190de17abed413e4e7eadcec31160c952c72a60dc5438de29e84c8d93ed" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.931535 4953 scope.go:117] "RemoveContainer" containerID="40cfa2cd3768c6aaca6fb54e82e93fe3ba9a6a359f139c30a4c65bc21eb4799d" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.948411 4953 scope.go:117] "RemoveContainer" containerID="1130d1c0eb55095648ba931767502e4b391aab712a60ba66ca340b733199bcad" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.960789 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/06554344-a634-4dec-aaf7-e3d9919d9e80-marketplace-trusted-ca\") pod \"06554344-a634-4dec-aaf7-e3d9919d9e80\" (UID: \"06554344-a634-4dec-aaf7-e3d9919d9e80\") " Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.960903 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06554344-a634-4dec-aaf7-e3d9919d9e80-marketplace-operator-metrics\") pod \"06554344-a634-4dec-aaf7-e3d9919d9e80\" (UID: \"06554344-a634-4dec-aaf7-e3d9919d9e80\") " Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.961015 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmtln\" (UniqueName: \"kubernetes.io/projected/06554344-a634-4dec-aaf7-e3d9919d9e80-kube-api-access-fmtln\") pod \"06554344-a634-4dec-aaf7-e3d9919d9e80\" (UID: \"06554344-a634-4dec-aaf7-e3d9919d9e80\") " Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.961433 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06554344-a634-4dec-aaf7-e3d9919d9e80-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "06554344-a634-4dec-aaf7-e3d9919d9e80" (UID: "06554344-a634-4dec-aaf7-e3d9919d9e80"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.966255 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06554344-a634-4dec-aaf7-e3d9919d9e80-kube-api-access-fmtln" (OuterVolumeSpecName: "kube-api-access-fmtln") pod "06554344-a634-4dec-aaf7-e3d9919d9e80" (UID: "06554344-a634-4dec-aaf7-e3d9919d9e80"). InnerVolumeSpecName "kube-api-access-fmtln". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:17:36 crc kubenswrapper[4953]: I1211 10:17:36.966272 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06554344-a634-4dec-aaf7-e3d9919d9e80-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "06554344-a634-4dec-aaf7-e3d9919d9e80" (UID: "06554344-a634-4dec-aaf7-e3d9919d9e80"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.062126 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmtln\" (UniqueName: \"kubernetes.io/projected/06554344-a634-4dec-aaf7-e3d9919d9e80-kube-api-access-fmtln\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.062180 4953 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06554344-a634-4dec-aaf7-e3d9919d9e80-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.062194 4953 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06554344-a634-4dec-aaf7-e3d9919d9e80-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.333050 4953 generic.go:334] "Generic (PLEG): container finished" podID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerID="198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035" exitCode=0 Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.333137 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" event={"ID":"06554344-a634-4dec-aaf7-e3d9919d9e80","Type":"ContainerDied","Data":"198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035"} Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.333186 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.333208 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" event={"ID":"06554344-a634-4dec-aaf7-e3d9919d9e80","Type":"ContainerDied","Data":"366a26b1116e09d72e9985dd5d5cf3c2279f6e338d085e54287029b14246fc31"} Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.333245 4953 scope.go:117] "RemoveContainer" containerID="198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.339416 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.349873 4953 scope.go:117] "RemoveContainer" containerID="512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.374804 4953 scope.go:117] "RemoveContainer" containerID="198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.384163 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmb4p"] Dec 11 10:17:37 crc kubenswrapper[4953]: E1211 10:17:37.384537 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035\": container with ID starting with 198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035 not found: ID does not exist" containerID="198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.384618 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035"} err="failed to get container status \"198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035\": rpc error: code = NotFound desc = could not find container \"198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035\": container with ID starting with 198e08d832fd5afa990f02b583df9f04268e5d1a887a3c9e3d1d9b80c743e035 not found: ID does not exist" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.384656 4953 scope.go:117] "RemoveContainer" containerID="512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f" Dec 11 10:17:37 crc kubenswrapper[4953]: E1211 10:17:37.385382 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f\": container with ID starting with 512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f not found: ID does not exist" containerID="512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.385403 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f"} err="failed to get container status \"512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f\": rpc error: code = NotFound desc = could not find container \"512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f\": container with ID starting with 512517f86924282acd209acc698ebf59a82a4e0987feffc8ef093ea10d90139f not found: ID does not exist" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.386697 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmb4p"] Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.514950 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.662280 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.761010 4953 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmb4p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 10:17:37 crc kubenswrapper[4953]: I1211 10:17:37.761102 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xmb4p" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:17:38 crc kubenswrapper[4953]: I1211 10:17:38.081591 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 11 10:17:38 crc kubenswrapper[4953]: I1211 10:17:38.143340 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 11 10:17:38 crc kubenswrapper[4953]: I1211 10:17:38.481825 4953 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" path="/var/lib/kubelet/pods/06554344-a634-4dec-aaf7-e3d9919d9e80/volumes" Dec 11 10:17:38 crc kubenswrapper[4953]: I1211 10:17:38.482449 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" path="/var/lib/kubelet/pods/2f406ece-016a-43bc-92c9-473b85ad0ca9/volumes" Dec 11 10:17:38 crc kubenswrapper[4953]: I1211 10:17:38.483200 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b099bc8-faec-451b-88a3-f03e46e3ad94" path="/var/lib/kubelet/pods/3b099bc8-faec-451b-88a3-f03e46e3ad94/volumes" Dec 11 10:17:38 crc kubenswrapper[4953]: I1211 10:17:38.484794 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" path="/var/lib/kubelet/pods/4468c58a-3cfc-4197-bf1b-8afc67dfda5e/volumes" Dec 11 10:17:38 crc kubenswrapper[4953]: I1211 10:17:38.485661 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" path="/var/lib/kubelet/pods/46f197d9-de5c-42c2-9781-47ed42389e11/volumes" Dec 11 10:17:38 crc kubenswrapper[4953]: I1211 10:17:38.486934 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" path="/var/lib/kubelet/pods/bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6/volumes" Dec 11 10:17:38 crc kubenswrapper[4953]: I1211 10:17:38.487833 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" path="/var/lib/kubelet/pods/fe9b2116-8ab4-4c4c-8c58-74e62f28893d/volumes" Dec 11 10:17:39 crc kubenswrapper[4953]: I1211 10:17:39.468642 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 11 10:17:39 crc kubenswrapper[4953]: I1211 10:17:39.825144 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 11 10:17:40 crc kubenswrapper[4953]: I1211 10:17:40.163790 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 11 10:17:40 crc kubenswrapper[4953]: I1211 10:17:40.346934 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 11 10:17:40 crc kubenswrapper[4953]: I1211 10:17:40.594909 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 11 10:17:40 crc kubenswrapper[4953]: I1211 10:17:40.983203 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 11 10:17:41 crc kubenswrapper[4953]: I1211 10:17:41.479642 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 11 10:17:41 crc kubenswrapper[4953]: I1211 10:17:41.560110 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 11 10:17:42 crc kubenswrapper[4953]: I1211 10:17:42.008329 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 11 10:17:43 crc kubenswrapper[4953]: I1211 10:17:43.785018 4953 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Dec 11 10:17:44 crc kubenswrapper[4953]: I1211 10:17:44.135942 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 10:17:45 crc kubenswrapper[4953]: I1211 10:17:45.752913 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 11 10:17:46 crc kubenswrapper[4953]: I1211 10:17:46.688145 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 11 10:17:48 crc kubenswrapper[4953]: I1211 10:17:48.193809 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:17:48 crc kubenswrapper[4953]: I1211 10:17:48.194201 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:17:50 crc kubenswrapper[4953]: I1211 10:17:50.708203 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 10:17:51 crc kubenswrapper[4953]: I1211 10:17:51.262500 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 11 10:17:51 crc kubenswrapper[4953]: I1211 10:17:51.963402 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 11 10:18:04 crc kubenswrapper[4953]: I1211 10:18:04.727418 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jnqj6"] Dec 11 10:18:04 crc kubenswrapper[4953]: I1211 10:18:04.728068 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" podUID="49bbe965-c5d1-4c35-a42b-3b8e7a264de7" containerName="controller-manager" containerID="cri-o://44508179cc11dcc34dbd7c78a7707efcbb07d29cdabb4d4822f6ed691c0eb73e" gracePeriod=30 Dec 11 10:18:04 crc kubenswrapper[4953]: I1211 10:18:04.791815 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h"] Dec 11 10:18:04 crc kubenswrapper[4953]: I1211 10:18:04.792048 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" podUID="d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9" containerName="route-controller-manager" containerID="cri-o://9963647be13983d82998a1a73165b9e9d0a7e47c07f700c59a1fe37fdb80c5af" gracePeriod=30 Dec 11 10:18:04 crc kubenswrapper[4953]: I1211 10:18:04.942406 4953 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-hmk2h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 11 10:18:04 crc kubenswrapper[4953]: 
I1211 10:18:04.942536 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" podUID="d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 11 10:18:05 crc kubenswrapper[4953]: I1211 10:18:05.241304 4953 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jnqj6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 10:18:05 crc kubenswrapper[4953]: I1211 10:18:05.241784 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" podUID="49bbe965-c5d1-4c35-a42b-3b8e7a264de7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 10:18:05 crc kubenswrapper[4953]: I1211 10:18:05.515693 4953 generic.go:334] "Generic (PLEG): container finished" podID="49bbe965-c5d1-4c35-a42b-3b8e7a264de7" containerID="44508179cc11dcc34dbd7c78a7707efcbb07d29cdabb4d4822f6ed691c0eb73e" exitCode=0 Dec 11 10:18:05 crc kubenswrapper[4953]: I1211 10:18:05.515770 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" event={"ID":"49bbe965-c5d1-4c35-a42b-3b8e7a264de7","Type":"ContainerDied","Data":"44508179cc11dcc34dbd7c78a7707efcbb07d29cdabb4d4822f6ed691c0eb73e"} Dec 11 10:18:05 crc kubenswrapper[4953]: I1211 10:18:05.517122 4953 generic.go:334] "Generic (PLEG): container finished" podID="d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9" containerID="9963647be13983d82998a1a73165b9e9d0a7e47c07f700c59a1fe37fdb80c5af" exitCode=0 Dec 11 10:18:05 crc kubenswrapper[4953]: I1211 10:18:05.517155 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" event={"ID":"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9","Type":"ContainerDied","Data":"9963647be13983d82998a1a73165b9e9d0a7e47c07f700c59a1fe37fdb80c5af"} Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:05.998126 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.004416 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.032852 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5999896-bngts"] Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033152 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerName="extract-content" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033178 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerName="extract-content" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033188 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9" containerName="route-controller-manager" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033196 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9" containerName="route-controller-manager" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033211 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033219 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033230 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b099bc8-faec-451b-88a3-f03e46e3ad94" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033239 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b099bc8-faec-451b-88a3-f03e46e3ad94" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033249 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033256 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033267 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" containerName="extract-content" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033273 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" containerName="extract-content" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033282 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033288 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033297 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033302 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033310 4953 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" containerName="extract-utilities" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033315 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" containerName="extract-utilities" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033321 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" containerName="extract-content" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033327 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" containerName="extract-content" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033338 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033345 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033354 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" containerName="extract-utilities" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033360 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" containerName="extract-utilities" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033367 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" containerName="extract-content" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033372 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" containerName="extract-content" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033381 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerName="marketplace-operator" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033386 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerName="marketplace-operator" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033395 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bbe965-c5d1-4c35-a42b-3b8e7a264de7" containerName="controller-manager" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033401 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bbe965-c5d1-4c35-a42b-3b8e7a264de7" containerName="controller-manager" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033410 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" containerName="extract-utilities" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033416 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" containerName="extract-utilities" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033423 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerName="marketplace-operator" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033429 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerName="marketplace-operator" Dec 11 
10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033437 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" containerName="extract-utilities" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033443 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" containerName="extract-utilities" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033450 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerName="extract-utilities" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033457 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerName="extract-utilities" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033464 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b099bc8-faec-451b-88a3-f03e46e3ad94" containerName="extract-utilities" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033470 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b099bc8-faec-451b-88a3-f03e46e3ad94" containerName="extract-utilities" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033476 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b099bc8-faec-451b-88a3-f03e46e3ad94" containerName="extract-content" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033482 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b099bc8-faec-451b-88a3-f03e46e3ad94" containerName="extract-content" Dec 11 10:18:06 crc kubenswrapper[4953]: E1211 10:18:06.033490 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" containerName="extract-content" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033496 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" containerName="extract-content" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033605 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9" containerName="route-controller-manager" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033617 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f406ece-016a-43bc-92c9-473b85ad0ca9" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033626 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b099bc8-faec-451b-88a3-f03e46e3ad94" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033634 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerName="marketplace-operator" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033642 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe9b2116-8ab4-4c4c-8c58-74e62f28893d" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033653 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f197d9-de5c-42c2-9781-47ed42389e11" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033665 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="06554344-a634-4dec-aaf7-e3d9919d9e80" containerName="marketplace-operator" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033675 4953 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="49bbe965-c5d1-4c35-a42b-3b8e7a264de7" containerName="controller-manager" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033685 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="4468c58a-3cfc-4197-bf1b-8afc67dfda5e" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.033693 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb98d03-7cdb-49c6-baa8-c4aa9605a2d6" containerName="registry-server" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.034148 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.040854 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5999896-bngts"] Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071070 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-config\") pod \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071161 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-client-ca\") pod \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071198 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-serving-cert\") pod \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071226 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd4hb\" (UniqueName: \"kubernetes.io/projected/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-kube-api-access-nd4hb\") pod \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071250 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hn9r\" (UniqueName: \"kubernetes.io/projected/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-kube-api-access-7hn9r\") pod \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071291 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-config\") pod \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071314 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-proxy-ca-bundles\") pod \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\" (UID: \"49bbe965-c5d1-4c35-a42b-3b8e7a264de7\") " Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071333 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-client-ca\") pod \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071361 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-serving-cert\") pod \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\" (UID: \"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9\") " Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071537 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e32c4d79-5695-42bf-a790-20e60544b3c1-config\") pod \"route-controller-manager-6b5999896-bngts\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071611 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcdrr\" (UniqueName: \"kubernetes.io/projected/e32c4d79-5695-42bf-a790-20e60544b3c1-kube-api-access-rcdrr\") pod \"route-controller-manager-6b5999896-bngts\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071703 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e32c4d79-5695-42bf-a790-20e60544b3c1-client-ca\") pod \"route-controller-manager-6b5999896-bngts\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.071766 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e32c4d79-5695-42bf-a790-20e60544b3c1-serving-cert\") pod \"route-controller-manager-6b5999896-bngts\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.072159 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-client-ca" (OuterVolumeSpecName: "client-ca") pod "49bbe965-c5d1-4c35-a42b-3b8e7a264de7" (UID: "49bbe965-c5d1-4c35-a42b-3b8e7a264de7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.072562 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-client-ca" (OuterVolumeSpecName: "client-ca") pod "d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9" (UID: "d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.072610 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-config" (OuterVolumeSpecName: "config") pod "d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9" (UID: "d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.073085 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "49bbe965-c5d1-4c35-a42b-3b8e7a264de7" (UID: "49bbe965-c5d1-4c35-a42b-3b8e7a264de7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.073676 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-config" (OuterVolumeSpecName: "config") pod "49bbe965-c5d1-4c35-a42b-3b8e7a264de7" (UID: "49bbe965-c5d1-4c35-a42b-3b8e7a264de7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.078863 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-kube-api-access-nd4hb" (OuterVolumeSpecName: "kube-api-access-nd4hb") pod "49bbe965-c5d1-4c35-a42b-3b8e7a264de7" (UID: "49bbe965-c5d1-4c35-a42b-3b8e7a264de7"). InnerVolumeSpecName "kube-api-access-nd4hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.084350 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9" (UID: "d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.090022 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "49bbe965-c5d1-4c35-a42b-3b8e7a264de7" (UID: "49bbe965-c5d1-4c35-a42b-3b8e7a264de7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.092611 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-kube-api-access-7hn9r" (OuterVolumeSpecName: "kube-api-access-7hn9r") pod "d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9" (UID: "d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9"). InnerVolumeSpecName "kube-api-access-7hn9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172439 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e32c4d79-5695-42bf-a790-20e60544b3c1-config\") pod \"route-controller-manager-6b5999896-bngts\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172558 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcdrr\" (UniqueName: \"kubernetes.io/projected/e32c4d79-5695-42bf-a790-20e60544b3c1-kube-api-access-rcdrr\") pod \"route-controller-manager-6b5999896-bngts\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172610 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e32c4d79-5695-42bf-a790-20e60544b3c1-client-ca\") pod \"route-controller-manager-6b5999896-bngts\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172640 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e32c4d79-5695-42bf-a790-20e60544b3c1-serving-cert\") pod \"route-controller-manager-6b5999896-bngts\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172684 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172694 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172702 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172710 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd4hb\" (UniqueName: \"kubernetes.io/projected/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-kube-api-access-nd4hb\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172718 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hn9r\" (UniqueName: \"kubernetes.io/projected/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-kube-api-access-7hn9r\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172726 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172733 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/49bbe965-c5d1-4c35-a42b-3b8e7a264de7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172741 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.172748 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.173939 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e32c4d79-5695-42bf-a790-20e60544b3c1-client-ca\") pod \"route-controller-manager-6b5999896-bngts\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.174348 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e32c4d79-5695-42bf-a790-20e60544b3c1-config\") pod \"route-controller-manager-6b5999896-bngts\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.176902 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e32c4d79-5695-42bf-a790-20e60544b3c1-serving-cert\") pod \"route-controller-manager-6b5999896-bngts\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.192872 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcdrr\" (UniqueName: \"kubernetes.io/projected/e32c4d79-5695-42bf-a790-20e60544b3c1-kube-api-access-rcdrr\") pod \"route-controller-manager-6b5999896-bngts\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.356550 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.525598 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" event={"ID":"49bbe965-c5d1-4c35-a42b-3b8e7a264de7","Type":"ContainerDied","Data":"2145d53d9384bfa2716e2ab7f0f06e4e4f003deda87d07d0062e9d317d5aae61"} Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.525895 4953 scope.go:117] "RemoveContainer" containerID="44508179cc11dcc34dbd7c78a7707efcbb07d29cdabb4d4822f6ed691c0eb73e" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.525669 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jnqj6" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.527704 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" event={"ID":"d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9","Type":"ContainerDied","Data":"0af57bb95800a5e9f3dfc197d71e2ce34a6a055a9eb20e81fd9952a3868a8d5c"} Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.527770 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.547495 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jnqj6"] Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.550235 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jnqj6"] Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.557887 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5999896-bngts"] Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.558559 4953 scope.go:117] "RemoveContainer" containerID="9963647be13983d82998a1a73165b9e9d0a7e47c07f700c59a1fe37fdb80c5af" Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.561218 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h"] Dec 11 10:18:06 crc kubenswrapper[4953]: I1211 10:18:06.564689 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hmk2h"] Dec 11 10:18:07 crc kubenswrapper[4953]: I1211 10:18:07.538762 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" event={"ID":"e32c4d79-5695-42bf-a790-20e60544b3c1","Type":"ContainerStarted","Data":"8c309c2e56bf99a63834cd09456a6c3ee80a2b8f9e83b1a4ef29715d3e47b122"} Dec 11 10:18:07 crc kubenswrapper[4953]: I1211 10:18:07.539111 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:07 crc kubenswrapper[4953]: I1211 10:18:07.539125 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" event={"ID":"e32c4d79-5695-42bf-a790-20e60544b3c1","Type":"ContainerStarted","Data":"50f55609accc7e24141cb333f9df1bc257bc0d6bfedeb1e134fe2aec71595211"} Dec 11 10:18:07 crc kubenswrapper[4953]: I1211 10:18:07.546442 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:07 crc kubenswrapper[4953]: I1211 10:18:07.562000 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" podStartSLOduration=3.561980569 podStartE2EDuration="3.561980569s" podCreationTimestamp="2025-12-11 10:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:18:07.560486372 +0000 UTC m=+405.584345405" watchObservedRunningTime="2025-12-11 10:18:07.561980569 +0000 
UTC m=+405.585839612" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.593907 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49bbe965-c5d1-4c35-a42b-3b8e7a264de7" path="/var/lib/kubelet/pods/49bbe965-c5d1-4c35-a42b-3b8e7a264de7/volumes" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.595932 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9" path="/var/lib/kubelet/pods/d5a48387-bf09-4cf3-b4c0-2b75d3c9b3f9/volumes" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.869864 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-766b89b5cb-g9v6d"] Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.878310 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.880842 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.881896 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.882674 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.888838 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.889119 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.890892 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.893158 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-766b89b5cb-g9v6d"] Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.905009 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.991137 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-client-ca\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.991274 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt4xg\" (UniqueName: \"kubernetes.io/projected/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-kube-api-access-pt4xg\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.991300 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-config\") pod 
\"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.991320 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-serving-cert\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:08 crc kubenswrapper[4953]: I1211 10:18:08.991384 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-proxy-ca-bundles\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:09 crc kubenswrapper[4953]: I1211 10:18:09.092260 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-client-ca\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:09 crc kubenswrapper[4953]: I1211 10:18:09.092383 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt4xg\" (UniqueName: \"kubernetes.io/projected/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-kube-api-access-pt4xg\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:09 crc kubenswrapper[4953]: I1211 10:18:09.092418 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-config\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:09 crc kubenswrapper[4953]: I1211 10:18:09.092458 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-serving-cert\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:09 crc kubenswrapper[4953]: I1211 10:18:09.092507 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-proxy-ca-bundles\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:09 crc kubenswrapper[4953]: I1211 10:18:09.094635 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-client-ca\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:09 
crc kubenswrapper[4953]: I1211 10:18:09.094723 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-proxy-ca-bundles\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:09 crc kubenswrapper[4953]: I1211 10:18:09.095693 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-config\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:09 crc kubenswrapper[4953]: I1211 10:18:09.105689 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-serving-cert\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:09 crc kubenswrapper[4953]: I1211 10:18:09.132390 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt4xg\" (UniqueName: \"kubernetes.io/projected/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-kube-api-access-pt4xg\") pod \"controller-manager-766b89b5cb-g9v6d\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:09 crc kubenswrapper[4953]: I1211 10:18:09.207953 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:09 crc kubenswrapper[4953]: I1211 10:18:09.429398 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-766b89b5cb-g9v6d"] Dec 11 10:18:09 crc kubenswrapper[4953]: I1211 10:18:09.601092 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" event={"ID":"3ac5883c-6fa6-4978-bf25-46a17fd59cb7","Type":"ContainerStarted","Data":"147319badd5aa14da22d7363f238f95fe07fe4db7b90e4ee14d40080bf4ef27a"} Dec 11 10:18:11 crc kubenswrapper[4953]: I1211 10:18:11.614365 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" event={"ID":"3ac5883c-6fa6-4978-bf25-46a17fd59cb7","Type":"ContainerStarted","Data":"1f1cba4b2ceae3cc5efa767970e16315726d22becdfd736a493fa858bbbfa616"} Dec 11 10:18:11 crc kubenswrapper[4953]: I1211 10:18:11.614748 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:11 crc kubenswrapper[4953]: I1211 10:18:11.628966 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:11 crc kubenswrapper[4953]: I1211 10:18:11.642983 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" podStartSLOduration=7.642963461 podStartE2EDuration="7.642963461s" podCreationTimestamp="2025-12-11 10:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-11 10:18:11.640839522 +0000 UTC m=+409.664698555" watchObservedRunningTime="2025-12-11 10:18:11.642963461 +0000 UTC m=+409.666822494" Dec 11 10:18:12 crc kubenswrapper[4953]: I1211 10:18:12.085135 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-766b89b5cb-g9v6d"] Dec 11 10:18:12 crc kubenswrapper[4953]: I1211 10:18:12.100157 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5999896-bngts"] Dec 11 10:18:12 crc kubenswrapper[4953]: I1211 10:18:12.100439 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" podUID="e32c4d79-5695-42bf-a790-20e60544b3c1" containerName="route-controller-manager" containerID="cri-o://8c309c2e56bf99a63834cd09456a6c3ee80a2b8f9e83b1a4ef29715d3e47b122" gracePeriod=30 Dec 11 10:18:12 crc kubenswrapper[4953]: I1211 10:18:12.623537 4953 generic.go:334] "Generic (PLEG): container finished" podID="e32c4d79-5695-42bf-a790-20e60544b3c1" containerID="8c309c2e56bf99a63834cd09456a6c3ee80a2b8f9e83b1a4ef29715d3e47b122" exitCode=0 Dec 11 10:18:12 crc kubenswrapper[4953]: I1211 10:18:12.623627 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" event={"ID":"e32c4d79-5695-42bf-a790-20e60544b3c1","Type":"ContainerDied","Data":"8c309c2e56bf99a63834cd09456a6c3ee80a2b8f9e83b1a4ef29715d3e47b122"} Dec 11 10:18:12 crc kubenswrapper[4953]: I1211 10:18:12.985764 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.157939 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcdrr\" (UniqueName: \"kubernetes.io/projected/e32c4d79-5695-42bf-a790-20e60544b3c1-kube-api-access-rcdrr\") pod \"e32c4d79-5695-42bf-a790-20e60544b3c1\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.158000 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e32c4d79-5695-42bf-a790-20e60544b3c1-client-ca\") pod \"e32c4d79-5695-42bf-a790-20e60544b3c1\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.158072 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e32c4d79-5695-42bf-a790-20e60544b3c1-serving-cert\") pod \"e32c4d79-5695-42bf-a790-20e60544b3c1\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.158138 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e32c4d79-5695-42bf-a790-20e60544b3c1-config\") pod \"e32c4d79-5695-42bf-a790-20e60544b3c1\" (UID: \"e32c4d79-5695-42bf-a790-20e60544b3c1\") " Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.158814 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32c4d79-5695-42bf-a790-20e60544b3c1-client-ca" (OuterVolumeSpecName: "client-ca") pod "e32c4d79-5695-42bf-a790-20e60544b3c1" (UID: "e32c4d79-5695-42bf-a790-20e60544b3c1"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.158851 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32c4d79-5695-42bf-a790-20e60544b3c1-config" (OuterVolumeSpecName: "config") pod "e32c4d79-5695-42bf-a790-20e60544b3c1" (UID: "e32c4d79-5695-42bf-a790-20e60544b3c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.165186 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e32c4d79-5695-42bf-a790-20e60544b3c1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e32c4d79-5695-42bf-a790-20e60544b3c1" (UID: "e32c4d79-5695-42bf-a790-20e60544b3c1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.180318 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e32c4d79-5695-42bf-a790-20e60544b3c1-kube-api-access-rcdrr" (OuterVolumeSpecName: "kube-api-access-rcdrr") pod "e32c4d79-5695-42bf-a790-20e60544b3c1" (UID: "e32c4d79-5695-42bf-a790-20e60544b3c1"). InnerVolumeSpecName "kube-api-access-rcdrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.259148 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e32c4d79-5695-42bf-a790-20e60544b3c1-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.259187 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e32c4d79-5695-42bf-a790-20e60544b3c1-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.259199 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcdrr\" (UniqueName: \"kubernetes.io/projected/e32c4d79-5695-42bf-a790-20e60544b3c1-kube-api-access-rcdrr\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.259214 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e32c4d79-5695-42bf-a790-20e60544b3c1-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.630123 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" event={"ID":"e32c4d79-5695-42bf-a790-20e60544b3c1","Type":"ContainerDied","Data":"50f55609accc7e24141cb333f9df1bc257bc0d6bfedeb1e134fe2aec71595211"} Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.630483 4953 scope.go:117] "RemoveContainer" containerID="8c309c2e56bf99a63834cd09456a6c3ee80a2b8f9e83b1a4ef29715d3e47b122" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.630139 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5999896-bngts" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.630273 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" podUID="3ac5883c-6fa6-4978-bf25-46a17fd59cb7" containerName="controller-manager" containerID="cri-o://1f1cba4b2ceae3cc5efa767970e16315726d22becdfd736a493fa858bbbfa616" gracePeriod=30 Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.669425 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5999896-bngts"] Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.672653 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5999896-bngts"] Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.831082 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m"] Dec 11 10:18:13 crc kubenswrapper[4953]: E1211 10:18:13.831327 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e32c4d79-5695-42bf-a790-20e60544b3c1" containerName="route-controller-manager" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.831339 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e32c4d79-5695-42bf-a790-20e60544b3c1" containerName="route-controller-manager" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.831436 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e32c4d79-5695-42bf-a790-20e60544b3c1" containerName="route-controller-manager" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.831850 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.834795 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.835277 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.835382 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.835423 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.835811 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.837077 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.931433 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d39e897-a654-48d2-95ac-f99002a740b7-serving-cert\") pod \"route-controller-manager-6c8dd8cc6f-mkg2m\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.931534 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxvzr\" (UniqueName: \"kubernetes.io/projected/2d39e897-a654-48d2-95ac-f99002a740b7-kube-api-access-dxvzr\") pod \"route-controller-manager-6c8dd8cc6f-mkg2m\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.931560 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d39e897-a654-48d2-95ac-f99002a740b7-client-ca\") pod \"route-controller-manager-6c8dd8cc6f-mkg2m\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.931649 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d39e897-a654-48d2-95ac-f99002a740b7-config\") pod \"route-controller-manager-6c8dd8cc6f-mkg2m\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:13 crc kubenswrapper[4953]: I1211 10:18:13.941412 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m"] Dec 11 10:18:14 crc kubenswrapper[4953]: I1211 10:18:14.032332 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d39e897-a654-48d2-95ac-f99002a740b7-config\") pod 
\"route-controller-manager-6c8dd8cc6f-mkg2m\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:14 crc kubenswrapper[4953]: I1211 10:18:14.032383 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d39e897-a654-48d2-95ac-f99002a740b7-serving-cert\") pod \"route-controller-manager-6c8dd8cc6f-mkg2m\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:14 crc kubenswrapper[4953]: I1211 10:18:14.032429 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxvzr\" (UniqueName: \"kubernetes.io/projected/2d39e897-a654-48d2-95ac-f99002a740b7-kube-api-access-dxvzr\") pod \"route-controller-manager-6c8dd8cc6f-mkg2m\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:14 crc kubenswrapper[4953]: I1211 10:18:14.032451 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d39e897-a654-48d2-95ac-f99002a740b7-client-ca\") pod \"route-controller-manager-6c8dd8cc6f-mkg2m\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:14 crc kubenswrapper[4953]: I1211 10:18:14.033598 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d39e897-a654-48d2-95ac-f99002a740b7-client-ca\") pod \"route-controller-manager-6c8dd8cc6f-mkg2m\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:14 crc kubenswrapper[4953]: I1211 10:18:14.033764 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d39e897-a654-48d2-95ac-f99002a740b7-config\") pod \"route-controller-manager-6c8dd8cc6f-mkg2m\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:14 crc kubenswrapper[4953]: I1211 10:18:14.038308 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d39e897-a654-48d2-95ac-f99002a740b7-serving-cert\") pod \"route-controller-manager-6c8dd8cc6f-mkg2m\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:14 crc kubenswrapper[4953]: I1211 10:18:14.047954 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxvzr\" (UniqueName: \"kubernetes.io/projected/2d39e897-a654-48d2-95ac-f99002a740b7-kube-api-access-dxvzr\") pod \"route-controller-manager-6c8dd8cc6f-mkg2m\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:14 crc kubenswrapper[4953]: I1211 10:18:14.147651 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:14 crc kubenswrapper[4953]: I1211 10:18:14.481691 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e32c4d79-5695-42bf-a790-20e60544b3c1" path="/var/lib/kubelet/pods/e32c4d79-5695-42bf-a790-20e60544b3c1/volumes" Dec 11 10:18:14 crc kubenswrapper[4953]: I1211 10:18:14.482249 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m"] Dec 11 10:18:14 crc kubenswrapper[4953]: W1211 10:18:14.484342 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d39e897_a654_48d2_95ac_f99002a740b7.slice/crio-2a0bff6e199202bd1d0925b5318e9451353d64602f4ef897e69e7d6c929fca45 WatchSource:0}: Error finding container 2a0bff6e199202bd1d0925b5318e9451353d64602f4ef897e69e7d6c929fca45: Status 404 returned error can't find the container with id 2a0bff6e199202bd1d0925b5318e9451353d64602f4ef897e69e7d6c929fca45 Dec 11 10:18:14 crc kubenswrapper[4953]: I1211 10:18:14.640684 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" event={"ID":"2d39e897-a654-48d2-95ac-f99002a740b7","Type":"ContainerStarted","Data":"2a0bff6e199202bd1d0925b5318e9451353d64602f4ef897e69e7d6c929fca45"} Dec 11 10:18:15 crc kubenswrapper[4953]: I1211 10:18:15.659337 4953 generic.go:334] "Generic (PLEG): container finished" podID="3ac5883c-6fa6-4978-bf25-46a17fd59cb7" containerID="1f1cba4b2ceae3cc5efa767970e16315726d22becdfd736a493fa858bbbfa616" exitCode=0 Dec 11 10:18:15 crc kubenswrapper[4953]: I1211 10:18:15.659389 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" event={"ID":"3ac5883c-6fa6-4978-bf25-46a17fd59cb7","Type":"ContainerDied","Data":"1f1cba4b2ceae3cc5efa767970e16315726d22becdfd736a493fa858bbbfa616"} Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.723222 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" event={"ID":"3ac5883c-6fa6-4978-bf25-46a17fd59cb7","Type":"ContainerDied","Data":"147319badd5aa14da22d7363f238f95fe07fe4db7b90e4ee14d40080bf4ef27a"} Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.723558 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="147319badd5aa14da22d7363f238f95fe07fe4db7b90e4ee14d40080bf4ef27a" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.725062 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" event={"ID":"2d39e897-a654-48d2-95ac-f99002a740b7","Type":"ContainerStarted","Data":"e7a4fb5c5dd9d3d4c086128ec957d5a9dacd8a651edc49df2ca5df4bfdbceb9c"} Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.725467 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.728906 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.745689 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" podStartSLOduration=3.745672507 podStartE2EDuration="3.745672507s" podCreationTimestamp="2025-12-11 10:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:18:16.742635619 +0000 UTC m=+414.766494642" watchObservedRunningTime="2025-12-11 10:18:16.745672507 +0000 UTC m=+414.769531540" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.773939 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c7fcc49c-mg84z"] Dec 11 10:18:16 crc kubenswrapper[4953]: E1211 10:18:16.774157 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ac5883c-6fa6-4978-bf25-46a17fd59cb7" containerName="controller-manager" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.774170 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac5883c-6fa6-4978-bf25-46a17fd59cb7" containerName="controller-manager" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.774295 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ac5883c-6fa6-4978-bf25-46a17fd59cb7" containerName="controller-manager" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.774895 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.784948 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c7fcc49c-mg84z"] Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.809233 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-proxy-ca-bundles\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.809359 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-config\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.809384 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-client-ca\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.809415 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/785253df-afd8-4bc1-b7d0-282c1549daef-serving-cert\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " 
pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.809454 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj4dh\" (UniqueName: \"kubernetes.io/projected/785253df-afd8-4bc1-b7d0-282c1549daef-kube-api-access-hj4dh\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.911204 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-client-ca\") pod \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.911282 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-serving-cert\") pod \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.911376 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt4xg\" (UniqueName: \"kubernetes.io/projected/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-kube-api-access-pt4xg\") pod \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.911442 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-proxy-ca-bundles\") pod \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.911505 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-config\") pod \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\" (UID: \"3ac5883c-6fa6-4978-bf25-46a17fd59cb7\") " Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.911738 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-config\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.911822 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-client-ca\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.911876 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/785253df-afd8-4bc1-b7d0-282c1549daef-serving-cert\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 
10:18:16.911949 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj4dh\" (UniqueName: \"kubernetes.io/projected/785253df-afd8-4bc1-b7d0-282c1549daef-kube-api-access-hj4dh\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.912004 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-proxy-ca-bundles\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.912486 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3ac5883c-6fa6-4978-bf25-46a17fd59cb7" (UID: "3ac5883c-6fa6-4978-bf25-46a17fd59cb7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.913025 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-config" (OuterVolumeSpecName: "config") pod "3ac5883c-6fa6-4978-bf25-46a17fd59cb7" (UID: "3ac5883c-6fa6-4978-bf25-46a17fd59cb7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.913476 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-client-ca\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.913837 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-proxy-ca-bundles\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.914620 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-config\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.917730 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/785253df-afd8-4bc1-b7d0-282c1549daef-serving-cert\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.921840 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"3ac5883c-6fa6-4978-bf25-46a17fd59cb7" (UID: "3ac5883c-6fa6-4978-bf25-46a17fd59cb7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.928823 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3ac5883c-6fa6-4978-bf25-46a17fd59cb7" (UID: "3ac5883c-6fa6-4978-bf25-46a17fd59cb7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.931714 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-kube-api-access-pt4xg" (OuterVolumeSpecName: "kube-api-access-pt4xg") pod "3ac5883c-6fa6-4978-bf25-46a17fd59cb7" (UID: "3ac5883c-6fa6-4978-bf25-46a17fd59cb7"). InnerVolumeSpecName "kube-api-access-pt4xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:18:16 crc kubenswrapper[4953]: I1211 10:18:16.947675 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj4dh\" (UniqueName: \"kubernetes.io/projected/785253df-afd8-4bc1-b7d0-282c1549daef-kube-api-access-hj4dh\") pod \"controller-manager-c7fcc49c-mg84z\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:17 crc kubenswrapper[4953]: I1211 10:18:17.006665 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:17 crc kubenswrapper[4953]: I1211 10:18:17.026382 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:17 crc kubenswrapper[4953]: I1211 10:18:17.026433 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt4xg\" (UniqueName: \"kubernetes.io/projected/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-kube-api-access-pt4xg\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:17 crc kubenswrapper[4953]: I1211 10:18:17.026447 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:17 crc kubenswrapper[4953]: I1211 10:18:17.026466 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:17 crc kubenswrapper[4953]: I1211 10:18:17.026479 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac5883c-6fa6-4978-bf25-46a17fd59cb7-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:17 crc kubenswrapper[4953]: I1211 10:18:17.091055 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:17 crc kubenswrapper[4953]: I1211 10:18:17.730893 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c7fcc49c-mg84z"] Dec 11 10:18:17 crc kubenswrapper[4953]: I1211 10:18:17.730942 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-766b89b5cb-g9v6d" Dec 11 10:18:17 crc kubenswrapper[4953]: W1211 10:18:17.739366 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod785253df_afd8_4bc1_b7d0_282c1549daef.slice/crio-96669279bdd3805e0d01c54a8c13163e078d5eb5e3b4e0d84bd6d264843c2ba2 WatchSource:0}: Error finding container 96669279bdd3805e0d01c54a8c13163e078d5eb5e3b4e0d84bd6d264843c2ba2: Status 404 returned error can't find the container with id 96669279bdd3805e0d01c54a8c13163e078d5eb5e3b4e0d84bd6d264843c2ba2 Dec 11 10:18:17 crc kubenswrapper[4953]: I1211 10:18:17.772141 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-766b89b5cb-g9v6d"] Dec 11 10:18:17 crc kubenswrapper[4953]: I1211 10:18:17.775207 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-766b89b5cb-g9v6d"] Dec 11 10:18:18 crc kubenswrapper[4953]: I1211 10:18:18.193954 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:18:18 crc kubenswrapper[4953]: I1211 10:18:18.194333 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:18:18 crc kubenswrapper[4953]: I1211 10:18:18.480686 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ac5883c-6fa6-4978-bf25-46a17fd59cb7" path="/var/lib/kubelet/pods/3ac5883c-6fa6-4978-bf25-46a17fd59cb7/volumes" Dec 11 10:18:18 crc kubenswrapper[4953]: I1211 10:18:18.737520 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" event={"ID":"785253df-afd8-4bc1-b7d0-282c1549daef","Type":"ContainerStarted","Data":"6d8e53f6f9ce8ffdcbdfabf128157b4c0ba7bc047c16c910b20c251966724286"} Dec 11 10:18:18 crc kubenswrapper[4953]: I1211 10:18:18.737598 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" event={"ID":"785253df-afd8-4bc1-b7d0-282c1549daef","Type":"ContainerStarted","Data":"96669279bdd3805e0d01c54a8c13163e078d5eb5e3b4e0d84bd6d264843c2ba2"} Dec 11 10:18:18 crc kubenswrapper[4953]: I1211 10:18:18.753881 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" podStartSLOduration=5.753859103 podStartE2EDuration="5.753859103s" podCreationTimestamp="2025-12-11 10:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:18:18.752277272 +0000 UTC m=+416.776136335" watchObservedRunningTime="2025-12-11 10:18:18.753859103 +0000 UTC m=+416.777718126" Dec 11 10:18:19 crc kubenswrapper[4953]: I1211 10:18:19.748957 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:19 crc kubenswrapper[4953]: I1211 10:18:19.753059 4953 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:29 crc kubenswrapper[4953]: I1211 10:18:29.032165 4953 scope.go:117] "RemoveContainer" containerID="6b38e6fc7946d99ff7570627e9bfd01e9f5e029ad3f3e2cda276461f222d7950" Dec 11 10:18:44 crc kubenswrapper[4953]: I1211 10:18:44.712607 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c7fcc49c-mg84z"] Dec 11 10:18:44 crc kubenswrapper[4953]: I1211 10:18:44.713470 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" podUID="785253df-afd8-4bc1-b7d0-282c1549daef" containerName="controller-manager" containerID="cri-o://6d8e53f6f9ce8ffdcbdfabf128157b4c0ba7bc047c16c910b20c251966724286" gracePeriod=30 Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.153008 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m"] Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.153324 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" podUID="2d39e897-a654-48d2-95ac-f99002a740b7" containerName="route-controller-manager" containerID="cri-o://e7a4fb5c5dd9d3d4c086128ec957d5a9dacd8a651edc49df2ca5df4bfdbceb9c" gracePeriod=30 Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.281771 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jdxzs"] Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.282840 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.284320 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.291305 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jdxzs"] Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.347808 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c394e3-db2a-449c-9963-a880e17adbb2-catalog-content\") pod \"redhat-operators-jdxzs\" (UID: \"65c394e3-db2a-449c-9963-a880e17adbb2\") " pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.347961 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg689\" (UniqueName: \"kubernetes.io/projected/65c394e3-db2a-449c-9963-a880e17adbb2-kube-api-access-rg689\") pod \"redhat-operators-jdxzs\" (UID: \"65c394e3-db2a-449c-9963-a880e17adbb2\") " pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.348007 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c394e3-db2a-449c-9963-a880e17adbb2-utilities\") pod \"redhat-operators-jdxzs\" (UID: \"65c394e3-db2a-449c-9963-a880e17adbb2\") " pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.449392 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c394e3-db2a-449c-9963-a880e17adbb2-catalog-content\") pod \"redhat-operators-jdxzs\" (UID: \"65c394e3-db2a-449c-9963-a880e17adbb2\") " pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.449505 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg689\" (UniqueName: \"kubernetes.io/projected/65c394e3-db2a-449c-9963-a880e17adbb2-kube-api-access-rg689\") pod \"redhat-operators-jdxzs\" (UID: \"65c394e3-db2a-449c-9963-a880e17adbb2\") " pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.449537 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c394e3-db2a-449c-9963-a880e17adbb2-utilities\") pod \"redhat-operators-jdxzs\" (UID: \"65c394e3-db2a-449c-9963-a880e17adbb2\") " pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.450223 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c394e3-db2a-449c-9963-a880e17adbb2-utilities\") pod \"redhat-operators-jdxzs\" (UID: \"65c394e3-db2a-449c-9963-a880e17adbb2\") " pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.450219 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c394e3-db2a-449c-9963-a880e17adbb2-catalog-content\") pod \"redhat-operators-jdxzs\" (UID: \"65c394e3-db2a-449c-9963-a880e17adbb2\") " 
pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.461255 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jtjgm"] Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.462301 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.463927 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.469751 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg689\" (UniqueName: \"kubernetes.io/projected/65c394e3-db2a-449c-9963-a880e17adbb2-kube-api-access-rg689\") pod \"redhat-operators-jdxzs\" (UID: \"65c394e3-db2a-449c-9963-a880e17adbb2\") " pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.473532 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtjgm"] Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.550616 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0483db9-edf7-4df4-8cd4-b64966d77014-catalog-content\") pod \"redhat-marketplace-jtjgm\" (UID: \"b0483db9-edf7-4df4-8cd4-b64966d77014\") " pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.550672 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0483db9-edf7-4df4-8cd4-b64966d77014-utilities\") pod \"redhat-marketplace-jtjgm\" (UID: \"b0483db9-edf7-4df4-8cd4-b64966d77014\") " pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.550722 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z522q\" (UniqueName: \"kubernetes.io/projected/b0483db9-edf7-4df4-8cd4-b64966d77014-kube-api-access-z522q\") pod \"redhat-marketplace-jtjgm\" (UID: \"b0483db9-edf7-4df4-8cd4-b64966d77014\") " pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.602008 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.651915 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z522q\" (UniqueName: \"kubernetes.io/projected/b0483db9-edf7-4df4-8cd4-b64966d77014-kube-api-access-z522q\") pod \"redhat-marketplace-jtjgm\" (UID: \"b0483db9-edf7-4df4-8cd4-b64966d77014\") " pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.652413 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0483db9-edf7-4df4-8cd4-b64966d77014-catalog-content\") pod \"redhat-marketplace-jtjgm\" (UID: \"b0483db9-edf7-4df4-8cd4-b64966d77014\") " pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.652448 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0483db9-edf7-4df4-8cd4-b64966d77014-utilities\") pod \"redhat-marketplace-jtjgm\" (UID: \"b0483db9-edf7-4df4-8cd4-b64966d77014\") " pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.653120 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0483db9-edf7-4df4-8cd4-b64966d77014-utilities\") pod \"redhat-marketplace-jtjgm\" (UID: \"b0483db9-edf7-4df4-8cd4-b64966d77014\") " pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.654063 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0483db9-edf7-4df4-8cd4-b64966d77014-catalog-content\") pod \"redhat-marketplace-jtjgm\" (UID: \"b0483db9-edf7-4df4-8cd4-b64966d77014\") " pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.675616 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z522q\" (UniqueName: \"kubernetes.io/projected/b0483db9-edf7-4df4-8cd4-b64966d77014-kube-api-access-z522q\") pod \"redhat-marketplace-jtjgm\" (UID: \"b0483db9-edf7-4df4-8cd4-b64966d77014\") " pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:45 crc kubenswrapper[4953]: I1211 10:18:45.801669 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.065170 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jdxzs"] Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.154694 4953 generic.go:334] "Generic (PLEG): container finished" podID="2d39e897-a654-48d2-95ac-f99002a740b7" containerID="e7a4fb5c5dd9d3d4c086128ec957d5a9dacd8a651edc49df2ca5df4bfdbceb9c" exitCode=0 Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.154791 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" event={"ID":"2d39e897-a654-48d2-95ac-f99002a740b7","Type":"ContainerDied","Data":"e7a4fb5c5dd9d3d4c086128ec957d5a9dacd8a651edc49df2ca5df4bfdbceb9c"} Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.156133 4953 generic.go:334] "Generic (PLEG): container finished" podID="785253df-afd8-4bc1-b7d0-282c1549daef" containerID="6d8e53f6f9ce8ffdcbdfabf128157b4c0ba7bc047c16c910b20c251966724286" exitCode=0 Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.156186 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" event={"ID":"785253df-afd8-4bc1-b7d0-282c1549daef","Type":"ContainerDied","Data":"6d8e53f6f9ce8ffdcbdfabf128157b4c0ba7bc047c16c910b20c251966724286"} Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.157154 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdxzs" event={"ID":"65c394e3-db2a-449c-9963-a880e17adbb2","Type":"ContainerStarted","Data":"960676da3e62cf3e0121d8e0078b59699e37ae5fbf9e0b533622c8ba3611f5eb"} Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.207838 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtjgm"] Dec 11 10:18:46 crc kubenswrapper[4953]: W1211 10:18:46.212308 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0483db9_edf7_4df4_8cd4_b64966d77014.slice/crio-abbf438a078384bceb67d80f3ddbd7ae7aafed49c1d954b050ce471a1f00e7db WatchSource:0}: Error finding container abbf438a078384bceb67d80f3ddbd7ae7aafed49c1d954b050ce471a1f00e7db: Status 404 returned error can't find the container with id abbf438a078384bceb67d80f3ddbd7ae7aafed49c1d954b050ce471a1f00e7db Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.795490 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.844602 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6"] Dec 11 10:18:46 crc kubenswrapper[4953]: E1211 10:18:46.844843 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d39e897-a654-48d2-95ac-f99002a740b7" containerName="route-controller-manager" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.844867 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d39e897-a654-48d2-95ac-f99002a740b7" containerName="route-controller-manager" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.844983 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d39e897-a654-48d2-95ac-f99002a740b7" containerName="route-controller-manager" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.845410 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.864920 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6"] Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.881210 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d39e897-a654-48d2-95ac-f99002a740b7-client-ca\") pod \"2d39e897-a654-48d2-95ac-f99002a740b7\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.881860 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxvzr\" (UniqueName: \"kubernetes.io/projected/2d39e897-a654-48d2-95ac-f99002a740b7-kube-api-access-dxvzr\") pod \"2d39e897-a654-48d2-95ac-f99002a740b7\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.881909 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d39e897-a654-48d2-95ac-f99002a740b7-serving-cert\") pod \"2d39e897-a654-48d2-95ac-f99002a740b7\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.882039 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d39e897-a654-48d2-95ac-f99002a740b7-config\") pod \"2d39e897-a654-48d2-95ac-f99002a740b7\" (UID: \"2d39e897-a654-48d2-95ac-f99002a740b7\") " Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.882232 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a76f213-069a-4c7f-a013-0654ef82d5bf-config\") pod \"route-controller-manager-69b576f67c-bctl6\" (UID: \"8a76f213-069a-4c7f-a013-0654ef82d5bf\") " pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.882307 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a76f213-069a-4c7f-a013-0654ef82d5bf-client-ca\") pod \"route-controller-manager-69b576f67c-bctl6\" (UID: 
\"8a76f213-069a-4c7f-a013-0654ef82d5bf\") " pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.882313 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d39e897-a654-48d2-95ac-f99002a740b7-client-ca" (OuterVolumeSpecName: "client-ca") pod "2d39e897-a654-48d2-95ac-f99002a740b7" (UID: "2d39e897-a654-48d2-95ac-f99002a740b7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.882428 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62kw8\" (UniqueName: \"kubernetes.io/projected/8a76f213-069a-4c7f-a013-0654ef82d5bf-kube-api-access-62kw8\") pod \"route-controller-manager-69b576f67c-bctl6\" (UID: \"8a76f213-069a-4c7f-a013-0654ef82d5bf\") " pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.882510 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a76f213-069a-4c7f-a013-0654ef82d5bf-serving-cert\") pod \"route-controller-manager-69b576f67c-bctl6\" (UID: \"8a76f213-069a-4c7f-a013-0654ef82d5bf\") " pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.882603 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d39e897-a654-48d2-95ac-f99002a740b7-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.883113 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d39e897-a654-48d2-95ac-f99002a740b7-config" (OuterVolumeSpecName: "config") pod "2d39e897-a654-48d2-95ac-f99002a740b7" (UID: "2d39e897-a654-48d2-95ac-f99002a740b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.885659 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.888279 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d39e897-a654-48d2-95ac-f99002a740b7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2d39e897-a654-48d2-95ac-f99002a740b7" (UID: "2d39e897-a654-48d2-95ac-f99002a740b7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.889887 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d39e897-a654-48d2-95ac-f99002a740b7-kube-api-access-dxvzr" (OuterVolumeSpecName: "kube-api-access-dxvzr") pod "2d39e897-a654-48d2-95ac-f99002a740b7" (UID: "2d39e897-a654-48d2-95ac-f99002a740b7"). InnerVolumeSpecName "kube-api-access-dxvzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.983529 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-proxy-ca-bundles\") pod \"785253df-afd8-4bc1-b7d0-282c1549daef\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.983610 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/785253df-afd8-4bc1-b7d0-282c1549daef-serving-cert\") pod \"785253df-afd8-4bc1-b7d0-282c1549daef\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.983650 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-config\") pod \"785253df-afd8-4bc1-b7d0-282c1549daef\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.983733 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-client-ca\") pod \"785253df-afd8-4bc1-b7d0-282c1549daef\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.983844 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj4dh\" (UniqueName: \"kubernetes.io/projected/785253df-afd8-4bc1-b7d0-282c1549daef-kube-api-access-hj4dh\") pod \"785253df-afd8-4bc1-b7d0-282c1549daef\" (UID: \"785253df-afd8-4bc1-b7d0-282c1549daef\") " Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.984015 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a76f213-069a-4c7f-a013-0654ef82d5bf-serving-cert\") pod \"route-controller-manager-69b576f67c-bctl6\" (UID: \"8a76f213-069a-4c7f-a013-0654ef82d5bf\") " pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.984071 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a76f213-069a-4c7f-a013-0654ef82d5bf-config\") pod \"route-controller-manager-69b576f67c-bctl6\" (UID: \"8a76f213-069a-4c7f-a013-0654ef82d5bf\") " pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.984119 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a76f213-069a-4c7f-a013-0654ef82d5bf-client-ca\") pod \"route-controller-manager-69b576f67c-bctl6\" (UID: \"8a76f213-069a-4c7f-a013-0654ef82d5bf\") " pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.984143 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62kw8\" (UniqueName: \"kubernetes.io/projected/8a76f213-069a-4c7f-a013-0654ef82d5bf-kube-api-access-62kw8\") pod \"route-controller-manager-69b576f67c-bctl6\" (UID: \"8a76f213-069a-4c7f-a013-0654ef82d5bf\") " pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" 
Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.984218 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d39e897-a654-48d2-95ac-f99002a740b7-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.984234 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxvzr\" (UniqueName: \"kubernetes.io/projected/2d39e897-a654-48d2-95ac-f99002a740b7-kube-api-access-dxvzr\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.984246 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d39e897-a654-48d2-95ac-f99002a740b7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.984451 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "785253df-afd8-4bc1-b7d0-282c1549daef" (UID: "785253df-afd8-4bc1-b7d0-282c1549daef"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.984596 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-config" (OuterVolumeSpecName: "config") pod "785253df-afd8-4bc1-b7d0-282c1549daef" (UID: "785253df-afd8-4bc1-b7d0-282c1549daef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.984923 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-client-ca" (OuterVolumeSpecName: "client-ca") pod "785253df-afd8-4bc1-b7d0-282c1549daef" (UID: "785253df-afd8-4bc1-b7d0-282c1549daef"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.985615 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a76f213-069a-4c7f-a013-0654ef82d5bf-config\") pod \"route-controller-manager-69b576f67c-bctl6\" (UID: \"8a76f213-069a-4c7f-a013-0654ef82d5bf\") " pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.986243 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a76f213-069a-4c7f-a013-0654ef82d5bf-client-ca\") pod \"route-controller-manager-69b576f67c-bctl6\" (UID: \"8a76f213-069a-4c7f-a013-0654ef82d5bf\") " pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.988925 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785253df-afd8-4bc1-b7d0-282c1549daef-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "785253df-afd8-4bc1-b7d0-282c1549daef" (UID: "785253df-afd8-4bc1-b7d0-282c1549daef"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.989111 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785253df-afd8-4bc1-b7d0-282c1549daef-kube-api-access-hj4dh" (OuterVolumeSpecName: "kube-api-access-hj4dh") pod "785253df-afd8-4bc1-b7d0-282c1549daef" (UID: "785253df-afd8-4bc1-b7d0-282c1549daef"). InnerVolumeSpecName "kube-api-access-hj4dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:18:46 crc kubenswrapper[4953]: I1211 10:18:46.989153 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a76f213-069a-4c7f-a013-0654ef82d5bf-serving-cert\") pod \"route-controller-manager-69b576f67c-bctl6\" (UID: \"8a76f213-069a-4c7f-a013-0654ef82d5bf\") " pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.005729 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62kw8\" (UniqueName: \"kubernetes.io/projected/8a76f213-069a-4c7f-a013-0654ef82d5bf-kube-api-access-62kw8\") pod \"route-controller-manager-69b576f67c-bctl6\" (UID: \"8a76f213-069a-4c7f-a013-0654ef82d5bf\") " pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.085889 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.085938 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/785253df-afd8-4bc1-b7d0-282c1549daef-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.085950 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.085960 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/785253df-afd8-4bc1-b7d0-282c1549daef-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.085978 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj4dh\" (UniqueName: \"kubernetes.io/projected/785253df-afd8-4bc1-b7d0-282c1549daef-kube-api-access-hj4dh\") on node \"crc\" DevicePath \"\"" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.165296 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" event={"ID":"2d39e897-a654-48d2-95ac-f99002a740b7","Type":"ContainerDied","Data":"2a0bff6e199202bd1d0925b5318e9451353d64602f4ef897e69e7d6c929fca45"} Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.165358 4953 scope.go:117] "RemoveContainer" containerID="e7a4fb5c5dd9d3d4c086128ec957d5a9dacd8a651edc49df2ca5df4bfdbceb9c" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.165457 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.170587 4953 generic.go:334] "Generic (PLEG): container finished" podID="b0483db9-edf7-4df4-8cd4-b64966d77014" containerID="b93bb524d4ec771e7f0045e4ceb4961847d62ca4f3ccd7f5882efe46d7e71923" exitCode=0 Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.170769 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtjgm" event={"ID":"b0483db9-edf7-4df4-8cd4-b64966d77014","Type":"ContainerDied","Data":"b93bb524d4ec771e7f0045e4ceb4961847d62ca4f3ccd7f5882efe46d7e71923"} Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.170811 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtjgm" event={"ID":"b0483db9-edf7-4df4-8cd4-b64966d77014","Type":"ContainerStarted","Data":"abbf438a078384bceb67d80f3ddbd7ae7aafed49c1d954b050ce471a1f00e7db"} Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.174520 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" event={"ID":"785253df-afd8-4bc1-b7d0-282c1549daef","Type":"ContainerDied","Data":"96669279bdd3805e0d01c54a8c13163e078d5eb5e3b4e0d84bd6d264843c2ba2"} Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.174612 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c7fcc49c-mg84z" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.176776 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.177746 4953 generic.go:334] "Generic (PLEG): container finished" podID="65c394e3-db2a-449c-9963-a880e17adbb2" containerID="739c3472931559a816e74897b801b5f0df5d2554f2c715c6de919e849f12553a" exitCode=0 Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.177792 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdxzs" event={"ID":"65c394e3-db2a-449c-9963-a880e17adbb2","Type":"ContainerDied","Data":"739c3472931559a816e74897b801b5f0df5d2554f2c715c6de919e849f12553a"} Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.203055 4953 scope.go:117] "RemoveContainer" containerID="6d8e53f6f9ce8ffdcbdfabf128157b4c0ba7bc047c16c910b20c251966724286" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.223090 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m"] Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.232007 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c8dd8cc6f-mkg2m"] Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.237312 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c7fcc49c-mg84z"] Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.240889 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c7fcc49c-mg84z"] Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.270587 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rrz7v"] Dec 11 10:18:47 crc kubenswrapper[4953]: E1211 10:18:47.270952 4953 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785253df-afd8-4bc1-b7d0-282c1549daef" containerName="controller-manager" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.270978 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="785253df-afd8-4bc1-b7d0-282c1549daef" containerName="controller-manager" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.271106 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="785253df-afd8-4bc1-b7d0-282c1549daef" containerName="controller-manager" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.271964 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.274308 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.275941 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrz7v"] Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.395856 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8prg9\" (UniqueName: \"kubernetes.io/projected/3e2c3944-72ff-4162-88c6-3e8583aa5065-kube-api-access-8prg9\") pod \"certified-operators-rrz7v\" (UID: \"3e2c3944-72ff-4162-88c6-3e8583aa5065\") " pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.396325 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2c3944-72ff-4162-88c6-3e8583aa5065-utilities\") pod \"certified-operators-rrz7v\" (UID: \"3e2c3944-72ff-4162-88c6-3e8583aa5065\") " pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.396385 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2c3944-72ff-4162-88c6-3e8583aa5065-catalog-content\") pod \"certified-operators-rrz7v\" (UID: \"3e2c3944-72ff-4162-88c6-3e8583aa5065\") " pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.424524 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6"] Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.497852 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8prg9\" (UniqueName: \"kubernetes.io/projected/3e2c3944-72ff-4162-88c6-3e8583aa5065-kube-api-access-8prg9\") pod \"certified-operators-rrz7v\" (UID: \"3e2c3944-72ff-4162-88c6-3e8583aa5065\") " pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.497910 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2c3944-72ff-4162-88c6-3e8583aa5065-utilities\") pod \"certified-operators-rrz7v\" (UID: \"3e2c3944-72ff-4162-88c6-3e8583aa5065\") " pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.497941 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3e2c3944-72ff-4162-88c6-3e8583aa5065-catalog-content\") pod \"certified-operators-rrz7v\" (UID: \"3e2c3944-72ff-4162-88c6-3e8583aa5065\") " pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.498552 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2c3944-72ff-4162-88c6-3e8583aa5065-catalog-content\") pod \"certified-operators-rrz7v\" (UID: \"3e2c3944-72ff-4162-88c6-3e8583aa5065\") " pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.498659 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2c3944-72ff-4162-88c6-3e8583aa5065-utilities\") pod \"certified-operators-rrz7v\" (UID: \"3e2c3944-72ff-4162-88c6-3e8583aa5065\") " pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.515103 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8prg9\" (UniqueName: \"kubernetes.io/projected/3e2c3944-72ff-4162-88c6-3e8583aa5065-kube-api-access-8prg9\") pod \"certified-operators-rrz7v\" (UID: \"3e2c3944-72ff-4162-88c6-3e8583aa5065\") " pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:18:47 crc kubenswrapper[4953]: I1211 10:18:47.590738 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.009769 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrz7v"] Dec 11 10:18:48 crc kubenswrapper[4953]: W1211 10:18:48.013244 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e2c3944_72ff_4162_88c6_3e8583aa5065.slice/crio-62212278fae6e35fb66842baa18bcd78b7c50e0fc4d906bd66e2af0826e16e51 WatchSource:0}: Error finding container 62212278fae6e35fb66842baa18bcd78b7c50e0fc4d906bd66e2af0826e16e51: Status 404 returned error can't find the container with id 62212278fae6e35fb66842baa18bcd78b7c50e0fc4d906bd66e2af0826e16e51 Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.185258 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" event={"ID":"8a76f213-069a-4c7f-a013-0654ef82d5bf","Type":"ContainerStarted","Data":"e31dd9ca12ad3ce8db59f7977c2e54c0ae14001bc9c18db2afda828448e24d72"} Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.185309 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" event={"ID":"8a76f213-069a-4c7f-a013-0654ef82d5bf","Type":"ContainerStarted","Data":"8e5391aa39cb0e1bfa43b6a135c424895c606ff18c3b8104634c5d03ab812549"} Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.185324 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.186271 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrz7v" event={"ID":"3e2c3944-72ff-4162-88c6-3e8583aa5065","Type":"ContainerStarted","Data":"62212278fae6e35fb66842baa18bcd78b7c50e0fc4d906bd66e2af0826e16e51"} Dec 11 10:18:48 
crc kubenswrapper[4953]: I1211 10:18:48.192802 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.194460 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.194524 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.194592 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.195334 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebcfb015c8d0726744962a05ad3b02d7514b72b3db32d83919120d58d0255b97"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.195421 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://ebcfb015c8d0726744962a05ad3b02d7514b72b3db32d83919120d58d0255b97" gracePeriod=600 Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.203635 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69b576f67c-bctl6" podStartSLOduration=3.203616819 podStartE2EDuration="3.203616819s" podCreationTimestamp="2025-12-11 10:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:18:48.200831179 +0000 UTC m=+446.224690212" watchObservedRunningTime="2025-12-11 10:18:48.203616819 +0000 UTC m=+446.227475852" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.276937 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kbqqj"] Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.281908 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.292895 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.298672 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kbqqj"] Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.312921 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fhcv\" (UniqueName: \"kubernetes.io/projected/c7f97b11-818a-4328-8013-3501f45516ef-kube-api-access-9fhcv\") pod \"community-operators-kbqqj\" (UID: \"c7f97b11-818a-4328-8013-3501f45516ef\") " pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.313019 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f97b11-818a-4328-8013-3501f45516ef-catalog-content\") pod \"community-operators-kbqqj\" (UID: \"c7f97b11-818a-4328-8013-3501f45516ef\") " pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.313066 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f97b11-818a-4328-8013-3501f45516ef-utilities\") pod \"community-operators-kbqqj\" (UID: \"c7f97b11-818a-4328-8013-3501f45516ef\") " pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.415283 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f97b11-818a-4328-8013-3501f45516ef-utilities\") pod \"community-operators-kbqqj\" (UID: \"c7f97b11-818a-4328-8013-3501f45516ef\") " pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.416003 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f97b11-818a-4328-8013-3501f45516ef-utilities\") pod \"community-operators-kbqqj\" (UID: \"c7f97b11-818a-4328-8013-3501f45516ef\") " pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.416490 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fhcv\" (UniqueName: \"kubernetes.io/projected/c7f97b11-818a-4328-8013-3501f45516ef-kube-api-access-9fhcv\") pod \"community-operators-kbqqj\" (UID: \"c7f97b11-818a-4328-8013-3501f45516ef\") " pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.416717 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f97b11-818a-4328-8013-3501f45516ef-catalog-content\") pod \"community-operators-kbqqj\" (UID: \"c7f97b11-818a-4328-8013-3501f45516ef\") " pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.417322 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f97b11-818a-4328-8013-3501f45516ef-catalog-content\") pod \"community-operators-kbqqj\" (UID: 
\"c7f97b11-818a-4328-8013-3501f45516ef\") " pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.457376 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fhcv\" (UniqueName: \"kubernetes.io/projected/c7f97b11-818a-4328-8013-3501f45516ef-kube-api-access-9fhcv\") pod \"community-operators-kbqqj\" (UID: \"c7f97b11-818a-4328-8013-3501f45516ef\") " pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.481956 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d39e897-a654-48d2-95ac-f99002a740b7" path="/var/lib/kubelet/pods/2d39e897-a654-48d2-95ac-f99002a740b7/volumes" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.482673 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785253df-afd8-4bc1-b7d0-282c1549daef" path="/var/lib/kubelet/pods/785253df-afd8-4bc1-b7d0-282c1549daef/volumes" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.612227 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:18:48 crc kubenswrapper[4953]: I1211 10:18:48.792801 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kbqqj"] Dec 11 10:18:48 crc kubenswrapper[4953]: W1211 10:18:48.796144 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7f97b11_818a_4328_8013_3501f45516ef.slice/crio-eef48b48dd2204fce038822fc5491d2b4a694d5f0d83a60995650c3134cd5e19 WatchSource:0}: Error finding container eef48b48dd2204fce038822fc5491d2b4a694d5f0d83a60995650c3134cd5e19: Status 404 returned error can't find the container with id eef48b48dd2204fce038822fc5491d2b4a694d5f0d83a60995650c3134cd5e19 Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.104550 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx"] Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.105671 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.108211 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.108246 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.109909 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.110462 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.110684 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.112024 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.117604 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.122173 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx"] Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.200012 4953 generic.go:334] "Generic (PLEG): container finished" podID="3e2c3944-72ff-4162-88c6-3e8583aa5065" containerID="fa75007dfd353a951c423b0cb3df9b218cc881780b15fc9ba884e91e2fde8c6b" exitCode=0 Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.200126 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrz7v" event={"ID":"3e2c3944-72ff-4162-88c6-3e8583aa5065","Type":"ContainerDied","Data":"fa75007dfd353a951c423b0cb3df9b218cc881780b15fc9ba884e91e2fde8c6b"} Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.204082 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="ebcfb015c8d0726744962a05ad3b02d7514b72b3db32d83919120d58d0255b97" exitCode=0 Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.204209 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"ebcfb015c8d0726744962a05ad3b02d7514b72b3db32d83919120d58d0255b97"} Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.204250 4953 scope.go:117] "RemoveContainer" containerID="bd6810974250266a6a2efbea13db5cb6f52a4bbdec05955f7b9f58e55d7a8c4a" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.206037 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbqqj" event={"ID":"c7f97b11-818a-4328-8013-3501f45516ef","Type":"ContainerStarted","Data":"eef48b48dd2204fce038822fc5491d2b4a694d5f0d83a60995650c3134cd5e19"} Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.276083 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dde32845-0e0f-4611-b7a4-ab84897de113-serving-cert\") pod 
\"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.276452 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde32845-0e0f-4611-b7a4-ab84897de113-config\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.277778 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdv5g\" (UniqueName: \"kubernetes.io/projected/dde32845-0e0f-4611-b7a4-ab84897de113-kube-api-access-cdv5g\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.277950 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dde32845-0e0f-4611-b7a4-ab84897de113-proxy-ca-bundles\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.278096 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dde32845-0e0f-4611-b7a4-ab84897de113-client-ca\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.379132 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dde32845-0e0f-4611-b7a4-ab84897de113-serving-cert\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.379947 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde32845-0e0f-4611-b7a4-ab84897de113-config\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.380113 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdv5g\" (UniqueName: \"kubernetes.io/projected/dde32845-0e0f-4611-b7a4-ab84897de113-kube-api-access-cdv5g\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.380215 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dde32845-0e0f-4611-b7a4-ab84897de113-proxy-ca-bundles\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " 
pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.380294 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dde32845-0e0f-4611-b7a4-ab84897de113-client-ca\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.381335 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dde32845-0e0f-4611-b7a4-ab84897de113-client-ca\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.381847 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde32845-0e0f-4611-b7a4-ab84897de113-config\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.381883 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dde32845-0e0f-4611-b7a4-ab84897de113-proxy-ca-bundles\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.386620 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dde32845-0e0f-4611-b7a4-ab84897de113-serving-cert\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.402097 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdv5g\" (UniqueName: \"kubernetes.io/projected/dde32845-0e0f-4611-b7a4-ab84897de113-kube-api-access-cdv5g\") pod \"controller-manager-7cd458d5f5-hb8gx\" (UID: \"dde32845-0e0f-4611-b7a4-ab84897de113\") " pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:49 crc kubenswrapper[4953]: I1211 10:18:49.420896 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:50 crc kubenswrapper[4953]: I1211 10:18:50.214050 4953 generic.go:334] "Generic (PLEG): container finished" podID="c7f97b11-818a-4328-8013-3501f45516ef" containerID="d39c4d27544802a80e1e4f37b860cbbad7fc15bb94ce8285e64255978843f4a1" exitCode=0 Dec 11 10:18:50 crc kubenswrapper[4953]: I1211 10:18:50.214169 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbqqj" event={"ID":"c7f97b11-818a-4328-8013-3501f45516ef","Type":"ContainerDied","Data":"d39c4d27544802a80e1e4f37b860cbbad7fc15bb94ce8285e64255978843f4a1"} Dec 11 10:18:50 crc kubenswrapper[4953]: I1211 10:18:50.756874 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx"] Dec 11 10:18:51 crc kubenswrapper[4953]: I1211 10:18:51.270222 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdxzs" event={"ID":"65c394e3-db2a-449c-9963-a880e17adbb2","Type":"ContainerStarted","Data":"f056e9a77cd692aa3244c5cddd9534fde8892d2bd2c63b4720579418d6405e55"} Dec 11 10:18:51 crc kubenswrapper[4953]: I1211 10:18:51.274677 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" event={"ID":"dde32845-0e0f-4611-b7a4-ab84897de113","Type":"ContainerStarted","Data":"96a0a9c58daea20c7c46bbdb3e7b78d92577341e748bf8cc1f429abbbc1dda80"} Dec 11 10:18:51 crc kubenswrapper[4953]: I1211 10:18:51.274730 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" event={"ID":"dde32845-0e0f-4611-b7a4-ab84897de113","Type":"ContainerStarted","Data":"f2aeae263022542e90cdf78d247375596339454e2678a58ae29cda7d05e3d173"} Dec 11 10:18:51 crc kubenswrapper[4953]: I1211 10:18:51.274899 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:51 crc kubenswrapper[4953]: I1211 10:18:51.277201 4953 generic.go:334] "Generic (PLEG): container finished" podID="b0483db9-edf7-4df4-8cd4-b64966d77014" containerID="2033ae9141190649dfa73f6a0f7eade48cffa9d7ccf633ab6cf2b1d88271009b" exitCode=0 Dec 11 10:18:51 crc kubenswrapper[4953]: I1211 10:18:51.277303 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtjgm" event={"ID":"b0483db9-edf7-4df4-8cd4-b64966d77014","Type":"ContainerDied","Data":"2033ae9141190649dfa73f6a0f7eade48cffa9d7ccf633ab6cf2b1d88271009b"} Dec 11 10:18:51 crc kubenswrapper[4953]: I1211 10:18:51.279805 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrz7v" event={"ID":"3e2c3944-72ff-4162-88c6-3e8583aa5065","Type":"ContainerStarted","Data":"0a071c74c5045f228b9600cf1b63ddf9fa00bd79550644f9142a31586be178ad"} Dec 11 10:18:51 crc kubenswrapper[4953]: I1211 10:18:51.286209 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" Dec 11 10:18:51 crc kubenswrapper[4953]: I1211 10:18:51.295189 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"142b8bb384b24715cd1ba95ad576a70c2c8e1fafe4e31f75f980739d852f35b1"} Dec 11 10:18:51 crc 
kubenswrapper[4953]: I1211 10:18:51.443454 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cd458d5f5-hb8gx" podStartSLOduration=6.443428304 podStartE2EDuration="6.443428304s" podCreationTimestamp="2025-12-11 10:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:18:51.370826828 +0000 UTC m=+449.394685861" watchObservedRunningTime="2025-12-11 10:18:51.443428304 +0000 UTC m=+449.467287337" Dec 11 10:18:52 crc kubenswrapper[4953]: I1211 10:18:52.302745 4953 generic.go:334] "Generic (PLEG): container finished" podID="65c394e3-db2a-449c-9963-a880e17adbb2" containerID="f056e9a77cd692aa3244c5cddd9534fde8892d2bd2c63b4720579418d6405e55" exitCode=0 Dec 11 10:18:52 crc kubenswrapper[4953]: I1211 10:18:52.302877 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdxzs" event={"ID":"65c394e3-db2a-449c-9963-a880e17adbb2","Type":"ContainerDied","Data":"f056e9a77cd692aa3244c5cddd9534fde8892d2bd2c63b4720579418d6405e55"} Dec 11 10:18:52 crc kubenswrapper[4953]: I1211 10:18:52.305521 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbqqj" event={"ID":"c7f97b11-818a-4328-8013-3501f45516ef","Type":"ContainerStarted","Data":"2b8b1450502613e015c153d1c2d3c9e95bdaca719f1b1bafc5eb1f456c4222f8"} Dec 11 10:18:52 crc kubenswrapper[4953]: I1211 10:18:52.308008 4953 generic.go:334] "Generic (PLEG): container finished" podID="3e2c3944-72ff-4162-88c6-3e8583aa5065" containerID="0a071c74c5045f228b9600cf1b63ddf9fa00bd79550644f9142a31586be178ad" exitCode=0 Dec 11 10:18:52 crc kubenswrapper[4953]: I1211 10:18:52.308069 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrz7v" event={"ID":"3e2c3944-72ff-4162-88c6-3e8583aa5065","Type":"ContainerDied","Data":"0a071c74c5045f228b9600cf1b63ddf9fa00bd79550644f9142a31586be178ad"} Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.166940 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dpqbw"] Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.168055 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.187367 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dpqbw"] Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.316350 4953 generic.go:334] "Generic (PLEG): container finished" podID="c7f97b11-818a-4328-8013-3501f45516ef" containerID="2b8b1450502613e015c153d1c2d3c9e95bdaca719f1b1bafc5eb1f456c4222f8" exitCode=0 Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.316452 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbqqj" event={"ID":"c7f97b11-818a-4328-8013-3501f45516ef","Type":"ContainerDied","Data":"2b8b1450502613e015c153d1c2d3c9e95bdaca719f1b1bafc5eb1f456c4222f8"} Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.323584 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.323688 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-registry-tls\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.323747 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-registry-certificates\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.323987 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45l2\" (UniqueName: \"kubernetes.io/projected/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-kube-api-access-t45l2\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.324070 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.324093 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-bound-sa-token\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.324129 4953 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.324274 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-trusted-ca\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.347671 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.425269 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.425326 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-bound-sa-token\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.425415 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-trusted-ca\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.425474 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.425529 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-registry-tls\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.425558 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-registry-certificates\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.425642 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t45l2\" (UniqueName: \"kubernetes.io/projected/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-kube-api-access-t45l2\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.426874 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.428313 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-registry-certificates\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.430009 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-trusted-ca\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.434749 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-registry-tls\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.435893 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.444258 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t45l2\" (UniqueName: \"kubernetes.io/projected/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-kube-api-access-t45l2\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc kubenswrapper[4953]: I1211 10:18:53.445498 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c784ea39-4c0a-409b-8842-e7cfe8ad4b7f-bound-sa-token\") pod \"image-registry-66df7c8f76-dpqbw\" (UID: \"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:53 crc 
kubenswrapper[4953]: I1211 10:18:53.489394 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:54 crc kubenswrapper[4953]: I1211 10:18:54.139012 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dpqbw"] Dec 11 10:18:54 crc kubenswrapper[4953]: W1211 10:18:54.147403 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc784ea39_4c0a_409b_8842_e7cfe8ad4b7f.slice/crio-0e3f9e0e72df36c90ff8f983fb31b002ad4bf7880b71c0753dc7a7b396293be2 WatchSource:0}: Error finding container 0e3f9e0e72df36c90ff8f983fb31b002ad4bf7880b71c0753dc7a7b396293be2: Status 404 returned error can't find the container with id 0e3f9e0e72df36c90ff8f983fb31b002ad4bf7880b71c0753dc7a7b396293be2 Dec 11 10:18:54 crc kubenswrapper[4953]: I1211 10:18:54.481458 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtjgm" event={"ID":"b0483db9-edf7-4df4-8cd4-b64966d77014","Type":"ContainerStarted","Data":"92fa892f6be9d959677a36cf45d0b18d4d7e784bfcb3643166efc29f44b4e20b"} Dec 11 10:18:54 crc kubenswrapper[4953]: I1211 10:18:54.481894 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" event={"ID":"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f","Type":"ContainerStarted","Data":"0e3f9e0e72df36c90ff8f983fb31b002ad4bf7880b71c0753dc7a7b396293be2"} Dec 11 10:18:54 crc kubenswrapper[4953]: I1211 10:18:54.523950 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jtjgm" podStartSLOduration=3.23568208 podStartE2EDuration="9.523925852s" podCreationTimestamp="2025-12-11 10:18:45 +0000 UTC" firstStartedPulling="2025-12-11 10:18:47.172057347 +0000 UTC m=+445.195916380" lastFinishedPulling="2025-12-11 10:18:53.460301119 +0000 UTC m=+451.484160152" observedRunningTime="2025-12-11 10:18:54.523131637 +0000 UTC m=+452.546990670" watchObservedRunningTime="2025-12-11 10:18:54.523925852 +0000 UTC m=+452.547784885" Dec 11 10:18:55 crc kubenswrapper[4953]: I1211 10:18:55.802396 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:55 crc kubenswrapper[4953]: I1211 10:18:55.802469 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:55 crc kubenswrapper[4953]: I1211 10:18:55.852274 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:18:56 crc kubenswrapper[4953]: I1211 10:18:56.545058 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" event={"ID":"c784ea39-4c0a-409b-8842-e7cfe8ad4b7f","Type":"ContainerStarted","Data":"6fe1129c25aa7dd6f6729c2f5a009c638417c949ff491acd8c61e30d1fa8eaaf"} Dec 11 10:18:56 crc kubenswrapper[4953]: I1211 10:18:56.545464 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:18:56 crc kubenswrapper[4953]: I1211 10:18:56.547437 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdxzs" 
event={"ID":"65c394e3-db2a-449c-9963-a880e17adbb2","Type":"ContainerStarted","Data":"c6d9d6c6db96490592314b26ed46f03af0fe5f40c18d71dd37313bf583798c84"} Dec 11 10:18:56 crc kubenswrapper[4953]: I1211 10:18:56.710732 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jdxzs" podStartSLOduration=5.198196367 podStartE2EDuration="11.710708036s" podCreationTimestamp="2025-12-11 10:18:45 +0000 UTC" firstStartedPulling="2025-12-11 10:18:47.187792893 +0000 UTC m=+445.211651926" lastFinishedPulling="2025-12-11 10:18:53.700304562 +0000 UTC m=+451.724163595" observedRunningTime="2025-12-11 10:18:56.709541869 +0000 UTC m=+454.733400922" watchObservedRunningTime="2025-12-11 10:18:56.710708036 +0000 UTC m=+454.734567069" Dec 11 10:18:56 crc kubenswrapper[4953]: I1211 10:18:56.715692 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" podStartSLOduration=3.715673506 podStartE2EDuration="3.715673506s" podCreationTimestamp="2025-12-11 10:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:18:56.572066835 +0000 UTC m=+454.595925888" watchObservedRunningTime="2025-12-11 10:18:56.715673506 +0000 UTC m=+454.739532539" Dec 11 10:18:58 crc kubenswrapper[4953]: I1211 10:18:58.661030 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrz7v" event={"ID":"3e2c3944-72ff-4162-88c6-3e8583aa5065","Type":"ContainerStarted","Data":"08043763bcd936c27f4c543bf1bb5bebe6691edf79488ba883109dd0fbf509c4"} Dec 11 10:19:05 crc kubenswrapper[4953]: I1211 10:19:05.602710 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:19:05 crc kubenswrapper[4953]: I1211 10:19:05.603372 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:19:05 crc kubenswrapper[4953]: I1211 10:19:05.653010 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:19:05 crc kubenswrapper[4953]: I1211 10:19:05.841045 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jtjgm" Dec 11 10:19:05 crc kubenswrapper[4953]: I1211 10:19:05.847079 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rrz7v" podStartSLOduration=12.915358441 podStartE2EDuration="18.847060161s" podCreationTimestamp="2025-12-11 10:18:47 +0000 UTC" firstStartedPulling="2025-12-11 10:18:50.110854897 +0000 UTC m=+448.134713950" lastFinishedPulling="2025-12-11 10:18:56.042556637 +0000 UTC m=+454.066415670" observedRunningTime="2025-12-11 10:19:05.845347116 +0000 UTC m=+463.869206169" watchObservedRunningTime="2025-12-11 10:19:05.847060161 +0000 UTC m=+463.870919204" Dec 11 10:19:05 crc kubenswrapper[4953]: I1211 10:19:05.875410 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jdxzs" Dec 11 10:19:07 crc kubenswrapper[4953]: I1211 10:19:07.591490 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:19:07 crc kubenswrapper[4953]: I1211 10:19:07.591864 4953 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:19:07 crc kubenswrapper[4953]: I1211 10:19:07.647171 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:19:07 crc kubenswrapper[4953]: I1211 10:19:07.877351 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:19:10 crc kubenswrapper[4953]: I1211 10:19:10.891857 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbqqj" event={"ID":"c7f97b11-818a-4328-8013-3501f45516ef","Type":"ContainerStarted","Data":"d7b2efe8d5917f94424513709a1358ad7b3e925bbe672e4feaa55fc8b1e00ae3"} Dec 11 10:19:10 crc kubenswrapper[4953]: I1211 10:19:10.916977 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kbqqj" podStartSLOduration=3.156061207 podStartE2EDuration="22.916958572s" podCreationTimestamp="2025-12-11 10:18:48 +0000 UTC" firstStartedPulling="2025-12-11 10:18:50.215485463 +0000 UTC m=+448.239344496" lastFinishedPulling="2025-12-11 10:19:09.976382828 +0000 UTC m=+468.000241861" observedRunningTime="2025-12-11 10:19:10.91378521 +0000 UTC m=+468.937644253" watchObservedRunningTime="2025-12-11 10:19:10.916958572 +0000 UTC m=+468.940817595" Dec 11 10:19:13 crc kubenswrapper[4953]: I1211 10:19:13.494422 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-dpqbw" Dec 11 10:19:13 crc kubenswrapper[4953]: I1211 10:19:13.545013 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-r99w9"] Dec 11 10:19:18 crc kubenswrapper[4953]: I1211 10:19:18.612361 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:19:18 crc kubenswrapper[4953]: I1211 10:19:18.612922 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:19:18 crc kubenswrapper[4953]: I1211 10:19:18.660704 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:19:18 crc kubenswrapper[4953]: I1211 10:19:18.983000 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kbqqj" Dec 11 10:19:38 crc kubenswrapper[4953]: I1211 10:19:38.584801 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" podUID="c1a4e773-6467-424c-935e-40ef82e5fa99" containerName="registry" containerID="cri-o://7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f" gracePeriod=30 Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.036848 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.154211 4953 generic.go:334] "Generic (PLEG): container finished" podID="c1a4e773-6467-424c-935e-40ef82e5fa99" containerID="7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f" exitCode=0 Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.154307 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" event={"ID":"c1a4e773-6467-424c-935e-40ef82e5fa99","Type":"ContainerDied","Data":"7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f"} Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.154341 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.154355 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-r99w9" event={"ID":"c1a4e773-6467-424c-935e-40ef82e5fa99","Type":"ContainerDied","Data":"225b422f5717617bb95a223a7e97fe0fb54d44b880eb5910a89d074815223079"} Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.154371 4953 scope.go:117] "RemoveContainer" containerID="7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.175866 4953 scope.go:117] "RemoveContainer" containerID="7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f" Dec 11 10:19:39 crc kubenswrapper[4953]: E1211 10:19:39.176343 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f\": container with ID starting with 7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f not found: ID does not exist" containerID="7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.176385 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f"} err="failed to get container status \"7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f\": rpc error: code = NotFound desc = could not find container \"7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f\": container with ID starting with 7900dc2ff05af76712d5c26ccf8f4c4c0c180a0a9e1fb8896d7d2ec165f1c25f not found: ID does not exist" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.187938 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1a4e773-6467-424c-935e-40ef82e5fa99-installation-pull-secrets\") pod \"c1a4e773-6467-424c-935e-40ef82e5fa99\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.187996 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drxnt\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-kube-api-access-drxnt\") pod \"c1a4e773-6467-424c-935e-40ef82e5fa99\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.188024 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-bound-sa-token\") pod \"c1a4e773-6467-424c-935e-40ef82e5fa99\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.188045 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-registry-tls\") pod \"c1a4e773-6467-424c-935e-40ef82e5fa99\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.188078 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1a4e773-6467-424c-935e-40ef82e5fa99-trusted-ca\") pod \"c1a4e773-6467-424c-935e-40ef82e5fa99\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.188102 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1a4e773-6467-424c-935e-40ef82e5fa99-ca-trust-extracted\") pod \"c1a4e773-6467-424c-935e-40ef82e5fa99\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.188361 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c1a4e773-6467-424c-935e-40ef82e5fa99\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.188404 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1a4e773-6467-424c-935e-40ef82e5fa99-registry-certificates\") pod \"c1a4e773-6467-424c-935e-40ef82e5fa99\" (UID: \"c1a4e773-6467-424c-935e-40ef82e5fa99\") " Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.190404 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a4e773-6467-424c-935e-40ef82e5fa99-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c1a4e773-6467-424c-935e-40ef82e5fa99" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.191719 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a4e773-6467-424c-935e-40ef82e5fa99-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c1a4e773-6467-424c-935e-40ef82e5fa99" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.195532 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c1a4e773-6467-424c-935e-40ef82e5fa99" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.200720 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a4e773-6467-424c-935e-40ef82e5fa99-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c1a4e773-6467-424c-935e-40ef82e5fa99" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.200839 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-kube-api-access-drxnt" (OuterVolumeSpecName: "kube-api-access-drxnt") pod "c1a4e773-6467-424c-935e-40ef82e5fa99" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99"). InnerVolumeSpecName "kube-api-access-drxnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.201276 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c1a4e773-6467-424c-935e-40ef82e5fa99" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.201301 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c1a4e773-6467-424c-935e-40ef82e5fa99" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.205985 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a4e773-6467-424c-935e-40ef82e5fa99-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c1a4e773-6467-424c-935e-40ef82e5fa99" (UID: "c1a4e773-6467-424c-935e-40ef82e5fa99"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.290283 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drxnt\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-kube-api-access-drxnt\") on node \"crc\" DevicePath \"\"" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.290353 4953 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1a4e773-6467-424c-935e-40ef82e5fa99-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.290366 4953 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.290377 4953 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1a4e773-6467-424c-935e-40ef82e5fa99-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.290411 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1a4e773-6467-424c-935e-40ef82e5fa99-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.290420 4953 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1a4e773-6467-424c-935e-40ef82e5fa99-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.290429 4953 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1a4e773-6467-424c-935e-40ef82e5fa99-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.490776 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-r99w9"] Dec 11 10:19:39 crc kubenswrapper[4953]: I1211 10:19:39.495236 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-r99w9"] Dec 11 10:19:40 crc kubenswrapper[4953]: I1211 10:19:40.483087 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a4e773-6467-424c-935e-40ef82e5fa99" path="/var/lib/kubelet/pods/c1a4e773-6467-424c-935e-40ef82e5fa99/volumes" Dec 11 10:20:29 crc kubenswrapper[4953]: I1211 10:20:29.195541 4953 scope.go:117] "RemoveContainer" containerID="de7282db68c7e4cd525175aaf5bfef924902be60be49e4f2e490bf6f4e88f9c3" Dec 11 10:21:18 crc kubenswrapper[4953]: I1211 10:21:18.194475 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:21:18 crc kubenswrapper[4953]: I1211 10:21:18.195266 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:21:48 crc 
kubenswrapper[4953]: I1211 10:21:48.193610 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:21:48 crc kubenswrapper[4953]: I1211 10:21:48.194174 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:22:18 crc kubenswrapper[4953]: I1211 10:22:18.202730 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:22:18 crc kubenswrapper[4953]: I1211 10:22:18.203246 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:22:18 crc kubenswrapper[4953]: I1211 10:22:18.203341 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:22:18 crc kubenswrapper[4953]: I1211 10:22:18.203995 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"142b8bb384b24715cd1ba95ad576a70c2c8e1fafe4e31f75f980739d852f35b1"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:22:18 crc kubenswrapper[4953]: I1211 10:22:18.204078 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://142b8bb384b24715cd1ba95ad576a70c2c8e1fafe4e31f75f980739d852f35b1" gracePeriod=600 Dec 11 10:22:18 crc kubenswrapper[4953]: I1211 10:22:18.339217 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="142b8bb384b24715cd1ba95ad576a70c2c8e1fafe4e31f75f980739d852f35b1" exitCode=0 Dec 11 10:22:18 crc kubenswrapper[4953]: I1211 10:22:18.339274 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"142b8bb384b24715cd1ba95ad576a70c2c8e1fafe4e31f75f980739d852f35b1"} Dec 11 10:22:18 crc kubenswrapper[4953]: I1211 10:22:18.339327 4953 scope.go:117] "RemoveContainer" containerID="ebcfb015c8d0726744962a05ad3b02d7514b72b3db32d83919120d58d0255b97" Dec 11 10:22:19 crc kubenswrapper[4953]: I1211 10:22:19.348278 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" 
event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"3ca59c50b35b5c8d77fc457ff5e5a06ef5ae754b46ae582746445b4e7704377c"} Dec 11 10:22:29 crc kubenswrapper[4953]: I1211 10:22:29.247211 4953 scope.go:117] "RemoveContainer" containerID="9658dbfb5befa8e7bc2c911bd90c45f78fa16822dc206266d4071f5e6712292b" Dec 11 10:22:29 crc kubenswrapper[4953]: I1211 10:22:29.270516 4953 scope.go:117] "RemoveContainer" containerID="108fd85f0f1ab0a5ec256d6544f68f5afc24a7358df6435f00e3e39642eff318" Dec 11 10:24:18 crc kubenswrapper[4953]: I1211 10:24:18.194445 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:24:18 crc kubenswrapper[4953]: I1211 10:24:18.195149 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:24:21 crc kubenswrapper[4953]: I1211 10:24:21.429487 4953 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 10:24:29 crc kubenswrapper[4953]: I1211 10:24:29.335634 4953 scope.go:117] "RemoveContainer" containerID="1f1cba4b2ceae3cc5efa767970e16315726d22becdfd736a493fa858bbbfa616" Dec 11 10:24:48 crc kubenswrapper[4953]: I1211 10:24:48.193687 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:24:48 crc kubenswrapper[4953]: I1211 10:24:48.194172 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.385042 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x6f57"] Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.390312 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovn-controller" containerID="cri-o://99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54" gracePeriod=30 Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.390434 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78" gracePeriod=30 Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.390431 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="nbdb" 
containerID="cri-o://c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1" gracePeriod=30 Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.390445 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="northd" containerID="cri-o://b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7" gracePeriod=30 Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.390567 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovn-acl-logging" containerID="cri-o://42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc" gracePeriod=30 Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.390663 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="sbdb" containerID="cri-o://8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa" gracePeriod=30 Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.390608 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="kube-rbac-proxy-node" containerID="cri-o://b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93" gracePeriod=30 Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.437710 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" containerID="cri-o://e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6" gracePeriod=30 Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.778845 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/3.log" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.781901 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovn-acl-logging/0.log" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.782608 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovn-controller/0.log" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.783362 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.847558 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cls25"] Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.847875 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.847905 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.847917 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovn-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.847925 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovn-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.847937 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="northd" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.847945 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="northd" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.847954 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.847962 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.847969 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.847977 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.847986 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.847995 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.848006 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="sbdb" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848015 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="sbdb" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.848028 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a4e773-6467-424c-935e-40ef82e5fa99" containerName="registry" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848036 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a4e773-6467-424c-935e-40ef82e5fa99" containerName="registry" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.848047 4953 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="nbdb" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848054 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="nbdb" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.848071 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="kubecfg-setup" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848078 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="kubecfg-setup" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.848086 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovn-acl-logging" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848093 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovn-acl-logging" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.848102 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="kube-rbac-proxy-node" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848110 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="kube-rbac-proxy-node" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848204 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="nbdb" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848217 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="kube-rbac-proxy-node" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848228 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="sbdb" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848245 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a4e773-6467-424c-935e-40ef82e5fa99" containerName="registry" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848252 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848262 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="northd" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848273 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848283 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848290 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848299 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848309 4953 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovn-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848321 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovn-acl-logging" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.848451 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848465 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: E1211 10:25:00.848477 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848484 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.848623 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerName="ovnkube-controller" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.850813 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947509 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-cni-netd\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947556 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-systemd-units\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947610 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c09d8243-6693-433e-bce1-8a99e5e37b95-ovn-node-metrics-cert\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947684 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-log-socket\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947708 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-etc-openvswitch\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947711 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-cni-netd" (OuterVolumeSpecName: 
"host-cni-netd") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947733 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-env-overrides\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947841 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-kubelet\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947867 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-cni-bin\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947908 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-ovn\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947937 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-systemd\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947966 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.947999 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-var-lib-openvswitch\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948020 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-openvswitch\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948043 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-run-ovn-kubernetes\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948089 4953 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-run-netns\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948159 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-ovnkube-script-lib\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948191 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-ovnkube-config\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948214 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78fl9\" (UniqueName: \"kubernetes.io/projected/c09d8243-6693-433e-bce1-8a99e5e37b95-kube-api-access-78fl9\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948219 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948241 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-slash\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948260 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948281 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-node-log\") pod \"c09d8243-6693-433e-bce1-8a99e5e37b95\" (UID: \"c09d8243-6693-433e-bce1-8a99e5e37b95\") " Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948698 4953 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948706 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-log-socket" (OuterVolumeSpecName: "log-socket") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948723 4953 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948752 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948765 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948776 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948793 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948801 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948819 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.948839 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.949260 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.949300 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.949342 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-slash" (OuterVolumeSpecName: "host-slash") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.949368 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-node-log" (OuterVolumeSpecName: "node-log") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.949396 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.949618 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.953737 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09d8243-6693-433e-bce1-8a99e5e37b95-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.953749 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09d8243-6693-433e-bce1-8a99e5e37b95-kube-api-access-78fl9" (OuterVolumeSpecName: "kube-api-access-78fl9") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). 
InnerVolumeSpecName "kube-api-access-78fl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:25:00 crc kubenswrapper[4953]: I1211 10:25:00.961992 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c09d8243-6693-433e-bce1-8a99e5e37b95" (UID: "c09d8243-6693-433e-bce1-8a99e5e37b95"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.049747 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-run-ovn-kubernetes\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.049811 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-kubelet\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.049828 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-run-ovn\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.049851 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-run-openvswitch\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.049868 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwtg4\" (UniqueName: \"kubernetes.io/projected/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-kube-api-access-qwtg4\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.049888 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-cni-bin\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.049908 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-log-socket\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.049988 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-node-log\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050080 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-env-overrides\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050114 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-systemd-units\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050163 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-ovnkube-config\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050195 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-ovn-node-metrics-cert\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050247 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-ovnkube-script-lib\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050273 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-run-netns\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050316 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-run-systemd\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050352 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050391 4953 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-slash\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050419 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-cni-netd\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050464 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-etc-openvswitch\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050504 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-var-lib-openvswitch\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050661 4953 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-slash\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050714 4953 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-node-log\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050731 4953 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050746 4953 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c09d8243-6693-433e-bce1-8a99e5e37b95-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050760 4953 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-log-socket\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050772 4953 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050792 4953 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050839 4953 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050906 4953 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050922 4953 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050948 4953 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050959 4953 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050975 4953 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050987 4953 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.050999 4953 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c09d8243-6693-433e-bce1-8a99e5e37b95-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.051011 4953 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.051023 4953 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c09d8243-6693-433e-bce1-8a99e5e37b95-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.051033 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78fl9\" (UniqueName: \"kubernetes.io/projected/c09d8243-6693-433e-bce1-8a99e5e37b95-kube-api-access-78fl9\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.152754 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-ovn-node-metrics-cert\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.152870 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-ovnkube-script-lib\") pod 
\"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.152914 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-run-netns\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.152986 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-run-systemd\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153032 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153084 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-slash\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153117 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-cni-netd\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153157 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-etc-openvswitch\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153201 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-var-lib-openvswitch\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153258 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-run-netns\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153278 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-run-ovn-kubernetes\") pod \"ovnkube-node-cls25\" (UID: 
\"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153370 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-run-systemd\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153448 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153874 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-slash\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153920 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-run-ovn-kubernetes\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153952 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-cni-netd\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153969 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-etc-openvswitch\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.153987 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-var-lib-openvswitch\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.154085 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-kubelet\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.154548 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-run-ovn\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.154709 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-run-openvswitch\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.154831 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwtg4\" (UniqueName: \"kubernetes.io/projected/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-kube-api-access-qwtg4\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.154951 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-cni-bin\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.155051 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-cni-bin\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.154705 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-run-ovn\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.154755 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-run-openvswitch\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.154127 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-host-kubelet\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.155059 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-log-socket\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.155248 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-node-log\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.154848 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-ovnkube-script-lib\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.155276 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-node-log\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.155305 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-env-overrides\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.155349 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-systemd-units\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.155402 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-ovnkube-config\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.155454 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-systemd-units\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.155767 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-log-socket\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.156239 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-env-overrides\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.157031 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-ovnkube-config\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.159455 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-ovn-node-metrics-cert\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.298492 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwtg4\" (UniqueName: \"kubernetes.io/projected/2ebb69c3-5d3f-42be-b799-47a62b31fdd6-kube-api-access-qwtg4\") pod \"ovnkube-node-cls25\" (UID: \"2ebb69c3-5d3f-42be-b799-47a62b31fdd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.453926 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovnkube-controller/3.log"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.457660 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovn-acl-logging/0.log"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.458251 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x6f57_c09d8243-6693-433e-bce1-8a99e5e37b95/ovn-controller/0.log"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.458919 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6" exitCode=0
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.458985 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa" exitCode=0
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459001 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1" exitCode=0
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459015 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7" exitCode=0
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459028 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78" exitCode=0
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459040 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93" exitCode=0
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459054 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc" exitCode=143
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459067 4953 generic.go:334] "Generic (PLEG): container finished" podID="c09d8243-6693-433e-bce1-8a99e5e37b95" containerID="99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54" exitCode=143
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459139 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459196 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459219 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459239 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459258 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459277 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459306 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459326 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459337 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459348 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459360 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459371 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459382 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459393 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459403 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459419 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459435 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459450 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459461 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459472 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459483 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459493 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459504 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459515 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459525 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459535 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459550 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459566 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459613 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459625 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459636 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459647 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459658 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459669 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459680 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459691 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459702 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459716 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57" event={"ID":"c09d8243-6693-433e-bce1-8a99e5e37b95","Type":"ContainerDied","Data":"1de50d676eb0b99c7d8a715b183ed3da13b81401140b684ae7ae1967be20b7c9"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459733 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459746 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459757 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459769 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459780 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459790 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459801 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459812 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459823 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459835 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.459860 4953 scope.go:117] "RemoveContainer" containerID="e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.460105 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x6f57"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.465435 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h4dvx_644e1d40-ab80-469e-94b4-540e52b8e2c0/kube-multus/2.log"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.466211 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h4dvx_644e1d40-ab80-469e-94b4-540e52b8e2c0/kube-multus/1.log"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.466288 4953 generic.go:334] "Generic (PLEG): container finished" podID="644e1d40-ab80-469e-94b4-540e52b8e2c0" containerID="9b6eb9191a87c2ce29c9393a9132ddb691923181877779b571678fb5a93b9feb" exitCode=2
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.466348 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h4dvx" event={"ID":"644e1d40-ab80-469e-94b4-540e52b8e2c0","Type":"ContainerDied","Data":"9b6eb9191a87c2ce29c9393a9132ddb691923181877779b571678fb5a93b9feb"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.466434 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc80f2149ec8320584aa8fd55223ba13d53848232acd659a71bb35fdea7a043f"}
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.467240 4953 scope.go:117] "RemoveContainer" containerID="9b6eb9191a87c2ce29c9393a9132ddb691923181877779b571678fb5a93b9feb"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.473515 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cls25"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.557153 4953 scope.go:117] "RemoveContainer" containerID="7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.600953 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x6f57"]
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.604995 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x6f57"]
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.605683 4953 scope.go:117] "RemoveContainer" containerID="8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.652490 4953 scope.go:117] "RemoveContainer" containerID="c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.676357 4953 scope.go:117] "RemoveContainer" containerID="b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.704464 4953 scope.go:117] "RemoveContainer" containerID="622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.718774 4953 scope.go:117] "RemoveContainer" containerID="b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.860241 4953 scope.go:117] "RemoveContainer" containerID="42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.918890 4953 scope.go:117] "RemoveContainer" containerID="99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.932639 4953 scope.go:117] "RemoveContainer" containerID="c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.946421 4953 scope.go:117] "RemoveContainer" containerID="e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"
Dec 11 10:25:01 crc kubenswrapper[4953]: E1211 10:25:01.947043 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6\": container with ID starting with e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6 not found: ID does not exist" containerID="e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.947094 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"} err="failed to get container status \"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6\": rpc error: code = NotFound desc = could not find container \"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6\": container with ID starting with e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.947130 4953 scope.go:117] "RemoveContainer" containerID="7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"
Dec 11 10:25:01 crc kubenswrapper[4953]: E1211 10:25:01.947550 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1\": container with ID starting with 7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1 not found: ID does not exist" containerID="7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.947610 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"} err="failed to get container status \"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1\": rpc error: code = NotFound desc = could not find container \"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1\": container with ID starting with 7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.947645 4953 scope.go:117] "RemoveContainer" containerID="8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"
Dec 11 10:25:01 crc kubenswrapper[4953]: E1211 10:25:01.948002 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\": container with ID starting with 8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa not found: ID does not exist" containerID="8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.948062 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"} err="failed to get container status \"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\": rpc error: code = NotFound desc = could not find container \"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\": container with ID starting with 8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.948078 4953 scope.go:117] "RemoveContainer" containerID="c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"
Dec 11 10:25:01 crc kubenswrapper[4953]: E1211 10:25:01.948333 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\": container with ID starting with c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1 not found: ID does not exist" containerID="c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.948386 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"} err="failed to get container status \"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\": rpc error: code = NotFound desc = could not find container \"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\": container with ID starting with c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.948409 4953 scope.go:117] "RemoveContainer" containerID="b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"
Dec 11 10:25:01 crc kubenswrapper[4953]: E1211 10:25:01.948728 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\": container with ID starting with b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7 not found: ID does not exist" containerID="b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.948757 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"} err="failed to get container status \"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\": rpc error: code = NotFound desc = could not find container \"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\": container with ID starting with b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.948784 4953 scope.go:117] "RemoveContainer" containerID="622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"
Dec 11 10:25:01 crc kubenswrapper[4953]: E1211 10:25:01.949879 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\": container with ID starting with 622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78 not found: ID does not exist" containerID="622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.949946 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"} err="failed to get container status \"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\": rpc error: code = NotFound desc = could not find container \"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\": container with ID starting with 622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.949963 4953 scope.go:117] "RemoveContainer" containerID="b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"
Dec 11 10:25:01 crc kubenswrapper[4953]: E1211 10:25:01.950291 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\": container with ID starting with b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93 not found: ID does not exist" containerID="b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.950317 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"} err="failed to get container status \"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\": rpc error: code = NotFound desc = could not find container \"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\": container with ID starting with b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.950332 4953 scope.go:117] "RemoveContainer" containerID="42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"
Dec 11 10:25:01 crc kubenswrapper[4953]: E1211 10:25:01.950686 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\": container with ID starting with 42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc not found: ID does not exist" containerID="42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.950738 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"} err="failed to get container status \"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\": rpc error: code = NotFound desc = could not find container \"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\": container with ID starting with 42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.950757 4953 scope.go:117] "RemoveContainer" containerID="99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"
Dec 11 10:25:01 crc kubenswrapper[4953]: E1211 10:25:01.951105 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\": container with ID starting with 99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54 not found: ID does not exist" containerID="99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.951142 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"} err="failed to get container status \"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\": rpc error: code = NotFound desc = could not find container \"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\": container with ID starting with 99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.951161 4953 scope.go:117] "RemoveContainer" containerID="c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"
Dec 11 10:25:01 crc kubenswrapper[4953]: E1211 10:25:01.951439 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\": container with ID starting with c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f not found: ID does not exist" containerID="c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.951498 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"} err="failed to get container status \"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\": rpc error: code = NotFound desc = could not find container \"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\": container with ID starting with c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.951527 4953 scope.go:117] "RemoveContainer" containerID="e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.951979 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"} err="failed to get container status \"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6\": rpc error: code = NotFound desc = could not find container \"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6\": container with ID starting with e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.952039 4953 scope.go:117] "RemoveContainer" containerID="7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.952339 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"} err="failed to get container status \"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1\": rpc error: code = NotFound desc = could not find container \"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1\": container with ID starting with 7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.952382 4953 scope.go:117] "RemoveContainer" containerID="8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.952688 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"} err="failed to get container status \"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\": rpc error: code = NotFound desc = could not find container \"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\": container with ID starting with 8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.952740 4953 scope.go:117] "RemoveContainer" containerID="c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.953063 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"} err="failed to get container status \"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\": rpc error: code = NotFound desc = could not find container \"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\": container with ID starting with c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.953085 4953 scope.go:117] "RemoveContainer" containerID="b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.953565 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"} err="failed to get container status \"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\": rpc error: code = NotFound desc = could not find container \"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\": container with ID starting with b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.953671 4953 scope.go:117] "RemoveContainer" containerID="622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.954597 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"} err="failed to get container status \"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\": rpc error: code = NotFound desc = could not find container \"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\": container with ID starting with 622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.954640 4953 scope.go:117] "RemoveContainer" containerID="b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.954999 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"} err="failed to get container status \"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\": rpc error: code = NotFound desc = could not find container \"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\": container with ID starting with b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.955039 4953 scope.go:117] "RemoveContainer" containerID="42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.955224 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"} err="failed to get container status \"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\": rpc error: code = NotFound desc = could not find container \"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\": container with ID starting with 42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.955243 4953 scope.go:117] "RemoveContainer" containerID="99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.955725 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"} err="failed to get container status \"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\": rpc error: code = NotFound desc = could not find container \"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\": container with ID starting with 99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.955784 4953 scope.go:117] "RemoveContainer" containerID="c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.956214 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"} err="failed to get container status \"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\": rpc error: code = NotFound desc = could not find container \"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\": container with ID starting with c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.956284 4953 scope.go:117] "RemoveContainer" containerID="e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.956647 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"} err="failed to get container status \"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6\": rpc error: code = NotFound desc = could not find container \"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6\": container with ID starting with e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.956733 4953 scope.go:117] "RemoveContainer" containerID="7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.957719 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"} err="failed to get container status \"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1\": rpc error: code = NotFound desc = could not find container \"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1\": container with ID starting with 7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.957744 4953 scope.go:117] "RemoveContainer" containerID="8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.958565 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"} err="failed to get container status \"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\": rpc error: code = NotFound desc = could not find container \"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\": container with ID starting with 8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.958613 4953 scope.go:117] "RemoveContainer" containerID="c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.959090 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"} err="failed to get container status \"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\": rpc error: code = NotFound desc = could not find container \"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\": container with ID starting with c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.959158 4953 scope.go:117] "RemoveContainer" containerID="b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.959458 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"} err="failed to get container status \"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\": rpc error: code = NotFound desc = could not find container \"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\": container with ID starting with b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7 not found: ID does not exist"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.959480 4953 scope.go:117] "RemoveContainer" containerID="622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"
Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.959781 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"} err="failed to get container status \"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\": rpc error: code = NotFound desc = could not find container \"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\": container with ID starting with
622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78 not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.959803 4953 scope.go:117] "RemoveContainer" containerID="b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.960075 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"} err="failed to get container status \"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\": rpc error: code = NotFound desc = could not find container \"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\": container with ID starting with b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93 not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.960097 4953 scope.go:117] "RemoveContainer" containerID="42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.960418 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"} err="failed to get container status \"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\": rpc error: code = NotFound desc = could not find container \"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\": container with ID starting with 42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.960439 4953 scope.go:117] "RemoveContainer" containerID="99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.960881 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"} err="failed to get container status \"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\": rpc error: code = NotFound desc = could not find container \"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\": container with ID starting with 99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54 not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.960907 4953 scope.go:117] "RemoveContainer" containerID="c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.961147 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"} err="failed to get container status \"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\": rpc error: code = NotFound desc = could not find container \"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\": container with ID starting with c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.961168 4953 scope.go:117] "RemoveContainer" containerID="e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.961428 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"} err="failed to get container status \"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6\": rpc error: code = NotFound desc = could not find container \"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6\": container with ID starting with e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6 not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.961459 4953 scope.go:117] "RemoveContainer" containerID="7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.961689 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1"} err="failed to get container status \"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1\": rpc error: code = NotFound desc = could not find container \"7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1\": container with ID starting with 7dc0cdbe5f1b125694bc32b6055f6f98ac803834f27c54f96be12ec7c359b5c1 not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.961704 4953 scope.go:117] "RemoveContainer" containerID="8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.961966 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa"} err="failed to get container status \"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\": rpc error: code = NotFound desc = could not find container \"8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa\": container with ID starting with 8da892281a7b8dea449aee461dcb35bef204e2a768689e751abcb21204091aaa not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.961983 4953 scope.go:117] "RemoveContainer" containerID="c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.962191 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1"} err="failed to get container status \"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\": rpc error: code = NotFound desc = could not find container \"c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1\": container with ID starting with c34fbadf5e4606e6ad395137eff9b9edc19b5d02125e818a6d8a19d0d20649e1 not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.962206 4953 scope.go:117] "RemoveContainer" containerID="b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.962369 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7"} err="failed to get container status \"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\": rpc error: code = NotFound desc = could not find container \"b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7\": container with ID starting with b2587a8f43682f31a01fae073769712321abb001f975f5ac5413264489a110f7 not found: ID does not exist" Dec 
11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.962382 4953 scope.go:117] "RemoveContainer" containerID="622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.962651 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78"} err="failed to get container status \"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\": rpc error: code = NotFound desc = could not find container \"622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78\": container with ID starting with 622e6781ae06491000d6c0e3d2922942c3709bb09483a8e0d2ab723972c2aa78 not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.962687 4953 scope.go:117] "RemoveContainer" containerID="b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.962944 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93"} err="failed to get container status \"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\": rpc error: code = NotFound desc = could not find container \"b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93\": container with ID starting with b88a1ca7bd533da2486525e0a6c3dc45272433e743146be01eb8013265f02d93 not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.962964 4953 scope.go:117] "RemoveContainer" containerID="42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.963159 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc"} err="failed to get container status \"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\": rpc error: code = NotFound desc = could not find container \"42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc\": container with ID starting with 42759c1ba178a28ddf9b11b221249364f6993c1288309dff6b1c50be13c3b6fc not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.963191 4953 scope.go:117] "RemoveContainer" containerID="99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.963455 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54"} err="failed to get container status \"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\": rpc error: code = NotFound desc = could not find container \"99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54\": container with ID starting with 99f628cfb5844a18bd0013d2cd5f7f545ef7a561017498bdfa4a6b6fc4686e54 not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.963472 4953 scope.go:117] "RemoveContainer" containerID="c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.963767 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f"} err="failed to get container status 
\"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\": rpc error: code = NotFound desc = could not find container \"c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f\": container with ID starting with c84dce3b218bcc630289f0bff286bad09d4481e1787ef2c048799bfc6c97108f not found: ID does not exist" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.963805 4953 scope.go:117] "RemoveContainer" containerID="e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6" Dec 11 10:25:01 crc kubenswrapper[4953]: I1211 10:25:01.964129 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6"} err="failed to get container status \"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6\": rpc error: code = NotFound desc = could not find container \"e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6\": container with ID starting with e1e0a7a3ed79a4ad164a0949259cb9d143376d0563f58526ab941a2f87b272f6 not found: ID does not exist" Dec 11 10:25:02 crc kubenswrapper[4953]: I1211 10:25:02.498500 4953 generic.go:334] "Generic (PLEG): container finished" podID="2ebb69c3-5d3f-42be-b799-47a62b31fdd6" containerID="06956b7a30896f20868de07db34f29d16a7cda12cfba1e06136926c411bbc9dd" exitCode=0 Dec 11 10:25:02 crc kubenswrapper[4953]: I1211 10:25:02.498643 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09d8243-6693-433e-bce1-8a99e5e37b95" path="/var/lib/kubelet/pods/c09d8243-6693-433e-bce1-8a99e5e37b95/volumes" Dec 11 10:25:02 crc kubenswrapper[4953]: I1211 10:25:02.499746 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" event={"ID":"2ebb69c3-5d3f-42be-b799-47a62b31fdd6","Type":"ContainerDied","Data":"06956b7a30896f20868de07db34f29d16a7cda12cfba1e06136926c411bbc9dd"} Dec 11 10:25:02 crc kubenswrapper[4953]: I1211 10:25:02.499769 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" event={"ID":"2ebb69c3-5d3f-42be-b799-47a62b31fdd6","Type":"ContainerStarted","Data":"41e45ebe0ec31013303e4943fc0427fc363c68edd2023c8bfb92430e60b70e60"} Dec 11 10:25:02 crc kubenswrapper[4953]: I1211 10:25:02.501496 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h4dvx_644e1d40-ab80-469e-94b4-540e52b8e2c0/kube-multus/2.log" Dec 11 10:25:02 crc kubenswrapper[4953]: I1211 10:25:02.501928 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h4dvx_644e1d40-ab80-469e-94b4-540e52b8e2c0/kube-multus/1.log" Dec 11 10:25:02 crc kubenswrapper[4953]: I1211 10:25:02.502113 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h4dvx" event={"ID":"644e1d40-ab80-469e-94b4-540e52b8e2c0","Type":"ContainerStarted","Data":"c37748658573843b2bc5a416e06ff3f7ca8ad4f301e88381a1e7a55d324cbb96"} Dec 11 10:25:03 crc kubenswrapper[4953]: I1211 10:25:03.622352 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" event={"ID":"2ebb69c3-5d3f-42be-b799-47a62b31fdd6","Type":"ContainerStarted","Data":"071fd5909deae3063b33d0fbd5d891a91b5271903ad5739129b0f6094e60c727"} Dec 11 10:25:03 crc kubenswrapper[4953]: I1211 10:25:03.622608 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" 
event={"ID":"2ebb69c3-5d3f-42be-b799-47a62b31fdd6","Type":"ContainerStarted","Data":"a55038a11fdec726e0318cb2c0068832e88d5e5dc60cbe5d1c8d4ad341eaa057"} Dec 11 10:25:03 crc kubenswrapper[4953]: I1211 10:25:03.622620 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" event={"ID":"2ebb69c3-5d3f-42be-b799-47a62b31fdd6","Type":"ContainerStarted","Data":"48270cadb45815778bd95266764ef9b2a69a3fe11bd24358a5ecd76462415fb5"} Dec 11 10:25:03 crc kubenswrapper[4953]: I1211 10:25:03.622629 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" event={"ID":"2ebb69c3-5d3f-42be-b799-47a62b31fdd6","Type":"ContainerStarted","Data":"9f7c5dd478a5460c8b7d8ac60ad95105b8bf39cd70b12cdc0e4a26d3dd2b9168"} Dec 11 10:25:03 crc kubenswrapper[4953]: I1211 10:25:03.622638 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" event={"ID":"2ebb69c3-5d3f-42be-b799-47a62b31fdd6","Type":"ContainerStarted","Data":"6a72973f6096fcf4df17afdf684a6573dd699f92a7a10a845afda124b53ae86e"} Dec 11 10:25:03 crc kubenswrapper[4953]: I1211 10:25:03.622646 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" event={"ID":"2ebb69c3-5d3f-42be-b799-47a62b31fdd6","Type":"ContainerStarted","Data":"e55dc9d188975bb315ac3d40f054f4272d56ffbb1cefb48bb4d3adb28f3ac084"} Dec 11 10:25:05 crc kubenswrapper[4953]: I1211 10:25:05.678272 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" event={"ID":"2ebb69c3-5d3f-42be-b799-47a62b31fdd6","Type":"ContainerStarted","Data":"8f8edaa67658f9867d515341742c7bbb4c0ffb7463a4f3ad27bb1f3f5aa27aa7"} Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.694420 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-slksb"] Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.696292 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.698848 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.698880 4953 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-sksp6" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.700304 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.700837 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.725557 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7b00572e-609f-47dd-8c0c-aeced1633b3e-node-mnt\") pod \"crc-storage-crc-slksb\" (UID: \"7b00572e-609f-47dd-8c0c-aeced1633b3e\") " pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.725644 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht92f\" (UniqueName: \"kubernetes.io/projected/7b00572e-609f-47dd-8c0c-aeced1633b3e-kube-api-access-ht92f\") pod \"crc-storage-crc-slksb\" (UID: \"7b00572e-609f-47dd-8c0c-aeced1633b3e\") " pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.725682 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7b00572e-609f-47dd-8c0c-aeced1633b3e-crc-storage\") pod \"crc-storage-crc-slksb\" (UID: \"7b00572e-609f-47dd-8c0c-aeced1633b3e\") " pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.826836 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7b00572e-609f-47dd-8c0c-aeced1633b3e-node-mnt\") pod \"crc-storage-crc-slksb\" (UID: \"7b00572e-609f-47dd-8c0c-aeced1633b3e\") " pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.826898 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht92f\" (UniqueName: \"kubernetes.io/projected/7b00572e-609f-47dd-8c0c-aeced1633b3e-kube-api-access-ht92f\") pod \"crc-storage-crc-slksb\" (UID: \"7b00572e-609f-47dd-8c0c-aeced1633b3e\") " pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.826945 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7b00572e-609f-47dd-8c0c-aeced1633b3e-crc-storage\") pod \"crc-storage-crc-slksb\" (UID: \"7b00572e-609f-47dd-8c0c-aeced1633b3e\") " pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.827136 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7b00572e-609f-47dd-8c0c-aeced1633b3e-node-mnt\") pod \"crc-storage-crc-slksb\" (UID: \"7b00572e-609f-47dd-8c0c-aeced1633b3e\") " pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.827819 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7b00572e-609f-47dd-8c0c-aeced1633b3e-crc-storage\") pod \"crc-storage-crc-slksb\" (UID: \"7b00572e-609f-47dd-8c0c-aeced1633b3e\") " pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:07 crc kubenswrapper[4953]: I1211 10:25:07.843490 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht92f\" (UniqueName: \"kubernetes.io/projected/7b00572e-609f-47dd-8c0c-aeced1633b3e-kube-api-access-ht92f\") pod \"crc-storage-crc-slksb\" (UID: \"7b00572e-609f-47dd-8c0c-aeced1633b3e\") " pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:08 crc kubenswrapper[4953]: I1211 10:25:08.011727 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:08 crc kubenswrapper[4953]: E1211 10:25:08.042034 4953 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-slksb_crc-storage_7b00572e-609f-47dd-8c0c-aeced1633b3e_0(1d7bd867d67ab62b3ed83d250a97e4f1686ead06710a28dcac72c5008b2f6161): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 10:25:08 crc kubenswrapper[4953]: E1211 10:25:08.042138 4953 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-slksb_crc-storage_7b00572e-609f-47dd-8c0c-aeced1633b3e_0(1d7bd867d67ab62b3ed83d250a97e4f1686ead06710a28dcac72c5008b2f6161): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:08 crc kubenswrapper[4953]: E1211 10:25:08.042197 4953 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-slksb_crc-storage_7b00572e-609f-47dd-8c0c-aeced1633b3e_0(1d7bd867d67ab62b3ed83d250a97e4f1686ead06710a28dcac72c5008b2f6161): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:08 crc kubenswrapper[4953]: E1211 10:25:08.042275 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-slksb_crc-storage(7b00572e-609f-47dd-8c0c-aeced1633b3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-slksb_crc-storage(7b00572e-609f-47dd-8c0c-aeced1633b3e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-slksb_crc-storage_7b00572e-609f-47dd-8c0c-aeced1633b3e_0(1d7bd867d67ab62b3ed83d250a97e4f1686ead06710a28dcac72c5008b2f6161): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-slksb" podUID="7b00572e-609f-47dd-8c0c-aeced1633b3e" Dec 11 10:25:08 crc kubenswrapper[4953]: I1211 10:25:08.609416 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-slksb"] Dec 11 10:25:08 crc kubenswrapper[4953]: I1211 10:25:08.698070 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:08 crc kubenswrapper[4953]: I1211 10:25:08.698490 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:08 crc kubenswrapper[4953]: I1211 10:25:08.698722 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" event={"ID":"2ebb69c3-5d3f-42be-b799-47a62b31fdd6","Type":"ContainerStarted","Data":"df60e6bd14e342155b1406b212261095d80fe695e73256e158afb9dc79de8f82"} Dec 11 10:25:08 crc kubenswrapper[4953]: I1211 10:25:08.699195 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:08 crc kubenswrapper[4953]: I1211 10:25:08.699212 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:08 crc kubenswrapper[4953]: I1211 10:25:08.699221 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:08 crc kubenswrapper[4953]: I1211 10:25:08.736920 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" podStartSLOduration=8.736903176 podStartE2EDuration="8.736903176s" podCreationTimestamp="2025-12-11 10:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:25:08.732558975 +0000 UTC m=+826.756418028" watchObservedRunningTime="2025-12-11 10:25:08.736903176 +0000 UTC m=+826.760762209" Dec 11 10:25:08 crc kubenswrapper[4953]: I1211 10:25:08.739361 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:08 crc kubenswrapper[4953]: E1211 10:25:08.740130 4953 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-slksb_crc-storage_7b00572e-609f-47dd-8c0c-aeced1633b3e_0(055329a8599eeebf721a1db86000b49ad0d7fb09ee3c7187ba49f6cb21715bbc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 10:25:08 crc kubenswrapper[4953]: E1211 10:25:08.740189 4953 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-slksb_crc-storage_7b00572e-609f-47dd-8c0c-aeced1633b3e_0(055329a8599eeebf721a1db86000b49ad0d7fb09ee3c7187ba49f6cb21715bbc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:08 crc kubenswrapper[4953]: E1211 10:25:08.740214 4953 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-slksb_crc-storage_7b00572e-609f-47dd-8c0c-aeced1633b3e_0(055329a8599eeebf721a1db86000b49ad0d7fb09ee3c7187ba49f6cb21715bbc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:08 crc kubenswrapper[4953]: E1211 10:25:08.740256 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-slksb_crc-storage(7b00572e-609f-47dd-8c0c-aeced1633b3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-slksb_crc-storage(7b00572e-609f-47dd-8c0c-aeced1633b3e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-slksb_crc-storage_7b00572e-609f-47dd-8c0c-aeced1633b3e_0(055329a8599eeebf721a1db86000b49ad0d7fb09ee3c7187ba49f6cb21715bbc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-slksb" podUID="7b00572e-609f-47dd-8c0c-aeced1633b3e" Dec 11 10:25:08 crc kubenswrapper[4953]: I1211 10:25:08.743224 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:18 crc kubenswrapper[4953]: I1211 10:25:18.193724 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:25:18 crc kubenswrapper[4953]: I1211 10:25:18.194385 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:25:18 crc kubenswrapper[4953]: I1211 10:25:18.194491 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:25:18 crc kubenswrapper[4953]: I1211 10:25:18.195126 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ca59c50b35b5c8d77fc457ff5e5a06ef5ae754b46ae582746445b4e7704377c"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:25:18 crc kubenswrapper[4953]: I1211 10:25:18.195218 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://3ca59c50b35b5c8d77fc457ff5e5a06ef5ae754b46ae582746445b4e7704377c" gracePeriod=600 Dec 11 10:25:18 crc kubenswrapper[4953]: I1211 10:25:18.766958 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="3ca59c50b35b5c8d77fc457ff5e5a06ef5ae754b46ae582746445b4e7704377c" exitCode=0 Dec 11 10:25:18 crc kubenswrapper[4953]: I1211 10:25:18.767006 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"3ca59c50b35b5c8d77fc457ff5e5a06ef5ae754b46ae582746445b4e7704377c"} Dec 11 10:25:18 crc kubenswrapper[4953]: I1211 10:25:18.767286 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"4128485b59765a5f0e1c236093ee311843a19fb26e6f522ba47964eefbd53b75"} Dec 11 10:25:18 crc kubenswrapper[4953]: I1211 10:25:18.767313 4953 scope.go:117] "RemoveContainer" containerID="142b8bb384b24715cd1ba95ad576a70c2c8e1fafe4e31f75f980739d852f35b1" Dec 11 10:25:20 crc kubenswrapper[4953]: I1211 10:25:20.472682 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:20 crc kubenswrapper[4953]: I1211 10:25:20.473722 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:20 crc kubenswrapper[4953]: I1211 10:25:20.709231 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-slksb"] Dec 11 10:25:20 crc kubenswrapper[4953]: I1211 10:25:20.724777 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:25:20 crc kubenswrapper[4953]: I1211 10:25:20.781740 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-slksb" event={"ID":"7b00572e-609f-47dd-8c0c-aeced1633b3e","Type":"ContainerStarted","Data":"976c8e1b1a9a5956ccc2b8429732cd8e0ed28979a16733cdb1284257d3dc0576"} Dec 11 10:25:22 crc kubenswrapper[4953]: I1211 10:25:22.796189 4953 generic.go:334] "Generic (PLEG): container finished" podID="7b00572e-609f-47dd-8c0c-aeced1633b3e" containerID="1b6d6a6825e52c2f49b48d7aca8104544adf7c2b86e7e6b4c6e80dd96d457aff" exitCode=0 Dec 11 10:25:22 crc kubenswrapper[4953]: I1211 10:25:22.796325 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-slksb" event={"ID":"7b00572e-609f-47dd-8c0c-aeced1633b3e","Type":"ContainerDied","Data":"1b6d6a6825e52c2f49b48d7aca8104544adf7c2b86e7e6b4c6e80dd96d457aff"} Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.061974 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.190302 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7b00572e-609f-47dd-8c0c-aeced1633b3e-node-mnt\") pod \"7b00572e-609f-47dd-8c0c-aeced1633b3e\" (UID: \"7b00572e-609f-47dd-8c0c-aeced1633b3e\") " Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.190377 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7b00572e-609f-47dd-8c0c-aeced1633b3e-crc-storage\") pod \"7b00572e-609f-47dd-8c0c-aeced1633b3e\" (UID: \"7b00572e-609f-47dd-8c0c-aeced1633b3e\") " Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.190483 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht92f\" (UniqueName: \"kubernetes.io/projected/7b00572e-609f-47dd-8c0c-aeced1633b3e-kube-api-access-ht92f\") pod \"7b00572e-609f-47dd-8c0c-aeced1633b3e\" (UID: \"7b00572e-609f-47dd-8c0c-aeced1633b3e\") " Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.190495 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b00572e-609f-47dd-8c0c-aeced1633b3e-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "7b00572e-609f-47dd-8c0c-aeced1633b3e" (UID: "7b00572e-609f-47dd-8c0c-aeced1633b3e"). 
InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.190749 4953 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7b00572e-609f-47dd-8c0c-aeced1633b3e-node-mnt\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.198994 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b00572e-609f-47dd-8c0c-aeced1633b3e-kube-api-access-ht92f" (OuterVolumeSpecName: "kube-api-access-ht92f") pod "7b00572e-609f-47dd-8c0c-aeced1633b3e" (UID: "7b00572e-609f-47dd-8c0c-aeced1633b3e"). InnerVolumeSpecName "kube-api-access-ht92f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.214247 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b00572e-609f-47dd-8c0c-aeced1633b3e-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "7b00572e-609f-47dd-8c0c-aeced1633b3e" (UID: "7b00572e-609f-47dd-8c0c-aeced1633b3e"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.291985 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht92f\" (UniqueName: \"kubernetes.io/projected/7b00572e-609f-47dd-8c0c-aeced1633b3e-kube-api-access-ht92f\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.292029 4953 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7b00572e-609f-47dd-8c0c-aeced1633b3e-crc-storage\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.814101 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-slksb" event={"ID":"7b00572e-609f-47dd-8c0c-aeced1633b3e","Type":"ContainerDied","Data":"976c8e1b1a9a5956ccc2b8429732cd8e0ed28979a16733cdb1284257d3dc0576"} Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.814738 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976c8e1b1a9a5956ccc2b8429732cd8e0ed28979a16733cdb1284257d3dc0576" Dec 11 10:25:24 crc kubenswrapper[4953]: I1211 10:25:24.814379 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-slksb" Dec 11 10:25:29 crc kubenswrapper[4953]: I1211 10:25:29.393748 4953 scope.go:117] "RemoveContainer" containerID="bc80f2149ec8320584aa8fd55223ba13d53848232acd659a71bb35fdea7a043f" Dec 11 10:25:29 crc kubenswrapper[4953]: I1211 10:25:29.851506 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h4dvx_644e1d40-ab80-469e-94b4-540e52b8e2c0/kube-multus/2.log" Dec 11 10:25:31 crc kubenswrapper[4953]: I1211 10:25:31.503051 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cls25" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.272914 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg"] Dec 11 10:25:32 crc kubenswrapper[4953]: E1211 10:25:32.273226 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b00572e-609f-47dd-8c0c-aeced1633b3e" containerName="storage" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.273251 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b00572e-609f-47dd-8c0c-aeced1633b3e" containerName="storage" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.273403 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b00572e-609f-47dd-8c0c-aeced1633b3e" containerName="storage" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.274377 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.277091 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.287189 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg"] Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.463941 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz79n\" (UniqueName: \"kubernetes.io/projected/e3eae748-2ab6-4203-826e-b7555edb049a-kube-api-access-qz79n\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg\" (UID: \"e3eae748-2ab6-4203-826e-b7555edb049a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.464178 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3eae748-2ab6-4203-826e-b7555edb049a-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg\" (UID: \"e3eae748-2ab6-4203-826e-b7555edb049a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.464472 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3eae748-2ab6-4203-826e-b7555edb049a-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg\" (UID: \"e3eae748-2ab6-4203-826e-b7555edb049a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 
10:25:32.566237 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz79n\" (UniqueName: \"kubernetes.io/projected/e3eae748-2ab6-4203-826e-b7555edb049a-kube-api-access-qz79n\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg\" (UID: \"e3eae748-2ab6-4203-826e-b7555edb049a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.566396 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3eae748-2ab6-4203-826e-b7555edb049a-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg\" (UID: \"e3eae748-2ab6-4203-826e-b7555edb049a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.566597 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3eae748-2ab6-4203-826e-b7555edb049a-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg\" (UID: \"e3eae748-2ab6-4203-826e-b7555edb049a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.566986 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3eae748-2ab6-4203-826e-b7555edb049a-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg\" (UID: \"e3eae748-2ab6-4203-826e-b7555edb049a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.567270 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3eae748-2ab6-4203-826e-b7555edb049a-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg\" (UID: \"e3eae748-2ab6-4203-826e-b7555edb049a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.595398 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz79n\" (UniqueName: \"kubernetes.io/projected/e3eae748-2ab6-4203-826e-b7555edb049a-kube-api-access-qz79n\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg\" (UID: \"e3eae748-2ab6-4203-826e-b7555edb049a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:32 crc kubenswrapper[4953]: I1211 10:25:32.890836 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:33 crc kubenswrapper[4953]: I1211 10:25:33.301591 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg"] Dec 11 10:25:33 crc kubenswrapper[4953]: I1211 10:25:33.877306 4953 generic.go:334] "Generic (PLEG): container finished" podID="e3eae748-2ab6-4203-826e-b7555edb049a" containerID="a7d7eac5c42c1e09577a255c2906e44d8af90eccde1072bc5d3f76c31a41079e" exitCode=0 Dec 11 10:25:33 crc kubenswrapper[4953]: I1211 10:25:33.877432 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" event={"ID":"e3eae748-2ab6-4203-826e-b7555edb049a","Type":"ContainerDied","Data":"a7d7eac5c42c1e09577a255c2906e44d8af90eccde1072bc5d3f76c31a41079e"} Dec 11 10:25:33 crc kubenswrapper[4953]: I1211 10:25:33.877558 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" event={"ID":"e3eae748-2ab6-4203-826e-b7555edb049a","Type":"ContainerStarted","Data":"d2abbd24c082e943064c1e71e673c6aa032fc67e356ccbe8d9dcbf5d1a0d5f65"} Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.009617 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rrs6q"] Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.010623 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.031850 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrs6q"] Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.116152 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrqhj\" (UniqueName: \"kubernetes.io/projected/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-kube-api-access-xrqhj\") pod \"redhat-operators-rrs6q\" (UID: \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\") " pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.116220 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-catalog-content\") pod \"redhat-operators-rrs6q\" (UID: \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\") " pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.116257 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-utilities\") pod \"redhat-operators-rrs6q\" (UID: \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\") " pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.217118 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrqhj\" (UniqueName: \"kubernetes.io/projected/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-kube-api-access-xrqhj\") pod \"redhat-operators-rrs6q\" (UID: \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\") " pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.217179 4953 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-catalog-content\") pod \"redhat-operators-rrs6q\" (UID: \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\") " pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.217213 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-utilities\") pod \"redhat-operators-rrs6q\" (UID: \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\") " pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.217728 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-utilities\") pod \"redhat-operators-rrs6q\" (UID: \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\") " pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.217774 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-catalog-content\") pod \"redhat-operators-rrs6q\" (UID: \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\") " pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.239451 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrqhj\" (UniqueName: \"kubernetes.io/projected/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-kube-api-access-xrqhj\") pod \"redhat-operators-rrs6q\" (UID: \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\") " pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.342385 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.581957 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrs6q"] Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.885612 4953 generic.go:334] "Generic (PLEG): container finished" podID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" containerID="ff769c0c890cfb72a8e1bc78678692be2930affeea027d09879baba9ace17364" exitCode=0 Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.885737 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrs6q" event={"ID":"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7","Type":"ContainerDied","Data":"ff769c0c890cfb72a8e1bc78678692be2930affeea027d09879baba9ace17364"} Dec 11 10:25:34 crc kubenswrapper[4953]: I1211 10:25:34.886010 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrs6q" event={"ID":"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7","Type":"ContainerStarted","Data":"0f97f9edbd7e4d88195745c3d55ccbdc48824e477d5adeaa5bb51a47b6ecf4d1"} Dec 11 10:25:36 crc kubenswrapper[4953]: I1211 10:25:36.900624 4953 generic.go:334] "Generic (PLEG): container finished" podID="e3eae748-2ab6-4203-826e-b7555edb049a" containerID="060748a93270ebf55fce0b14daadccc9cd089eb536ef48f0ec4ab748edcf0f7b" exitCode=0 Dec 11 10:25:36 crc kubenswrapper[4953]: I1211 10:25:36.900686 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" event={"ID":"e3eae748-2ab6-4203-826e-b7555edb049a","Type":"ContainerDied","Data":"060748a93270ebf55fce0b14daadccc9cd089eb536ef48f0ec4ab748edcf0f7b"} Dec 11 10:25:37 crc kubenswrapper[4953]: I1211 10:25:37.910706 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrs6q" event={"ID":"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7","Type":"ContainerStarted","Data":"c871cc1ca7e923d3c61f6d526e1a3722409dbb63cb23e374eb66783f8309d05c"} Dec 11 10:25:37 crc kubenswrapper[4953]: I1211 10:25:37.913930 4953 generic.go:334] "Generic (PLEG): container finished" podID="e3eae748-2ab6-4203-826e-b7555edb049a" containerID="cf4db5c3aea7e46102f126885640a8347c19ffe30350138273ea795c8ea457c0" exitCode=0 Dec 11 10:25:37 crc kubenswrapper[4953]: I1211 10:25:37.914026 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" event={"ID":"e3eae748-2ab6-4203-826e-b7555edb049a","Type":"ContainerDied","Data":"cf4db5c3aea7e46102f126885640a8347c19ffe30350138273ea795c8ea457c0"} Dec 11 10:25:38 crc kubenswrapper[4953]: I1211 10:25:38.925884 4953 generic.go:334] "Generic (PLEG): container finished" podID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" containerID="c871cc1ca7e923d3c61f6d526e1a3722409dbb63cb23e374eb66783f8309d05c" exitCode=0 Dec 11 10:25:38 crc kubenswrapper[4953]: I1211 10:25:38.925965 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrs6q" event={"ID":"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7","Type":"ContainerDied","Data":"c871cc1ca7e923d3c61f6d526e1a3722409dbb63cb23e374eb66783f8309d05c"} Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.214044 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.325007 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3eae748-2ab6-4203-826e-b7555edb049a-bundle\") pod \"e3eae748-2ab6-4203-826e-b7555edb049a\" (UID: \"e3eae748-2ab6-4203-826e-b7555edb049a\") " Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.325130 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3eae748-2ab6-4203-826e-b7555edb049a-util\") pod \"e3eae748-2ab6-4203-826e-b7555edb049a\" (UID: \"e3eae748-2ab6-4203-826e-b7555edb049a\") " Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.325207 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz79n\" (UniqueName: \"kubernetes.io/projected/e3eae748-2ab6-4203-826e-b7555edb049a-kube-api-access-qz79n\") pod \"e3eae748-2ab6-4203-826e-b7555edb049a\" (UID: \"e3eae748-2ab6-4203-826e-b7555edb049a\") " Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.327814 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3eae748-2ab6-4203-826e-b7555edb049a-bundle" (OuterVolumeSpecName: "bundle") pod "e3eae748-2ab6-4203-826e-b7555edb049a" (UID: "e3eae748-2ab6-4203-826e-b7555edb049a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.331331 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3eae748-2ab6-4203-826e-b7555edb049a-kube-api-access-qz79n" (OuterVolumeSpecName: "kube-api-access-qz79n") pod "e3eae748-2ab6-4203-826e-b7555edb049a" (UID: "e3eae748-2ab6-4203-826e-b7555edb049a"). InnerVolumeSpecName "kube-api-access-qz79n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.336299 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3eae748-2ab6-4203-826e-b7555edb049a-util" (OuterVolumeSpecName: "util") pod "e3eae748-2ab6-4203-826e-b7555edb049a" (UID: "e3eae748-2ab6-4203-826e-b7555edb049a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.426185 4953 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3eae748-2ab6-4203-826e-b7555edb049a-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.426239 4953 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3eae748-2ab6-4203-826e-b7555edb049a-util\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.426249 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz79n\" (UniqueName: \"kubernetes.io/projected/e3eae748-2ab6-4203-826e-b7555edb049a-kube-api-access-qz79n\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.937866 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" event={"ID":"e3eae748-2ab6-4203-826e-b7555edb049a","Type":"ContainerDied","Data":"d2abbd24c082e943064c1e71e673c6aa032fc67e356ccbe8d9dcbf5d1a0d5f65"} Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.938321 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2abbd24c082e943064c1e71e673c6aa032fc67e356ccbe8d9dcbf5d1a0d5f65" Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.937876 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg" Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.942152 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrs6q" event={"ID":"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7","Type":"ContainerStarted","Data":"fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327"} Dec 11 10:25:39 crc kubenswrapper[4953]: I1211 10:25:39.973703 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rrs6q" podStartSLOduration=2.1120291509999998 podStartE2EDuration="6.973672916s" podCreationTimestamp="2025-12-11 10:25:33 +0000 UTC" firstStartedPulling="2025-12-11 10:25:34.887112476 +0000 UTC m=+852.910971519" lastFinishedPulling="2025-12-11 10:25:39.748756251 +0000 UTC m=+857.772615284" observedRunningTime="2025-12-11 10:25:39.969025593 +0000 UTC m=+857.992884626" watchObservedRunningTime="2025-12-11 10:25:39.973672916 +0000 UTC m=+857.997531989" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.030196 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-qnz6f"] Dec 11 10:25:43 crc kubenswrapper[4953]: E1211 10:25:43.030883 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3eae748-2ab6-4203-826e-b7555edb049a" containerName="extract" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.030896 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3eae748-2ab6-4203-826e-b7555edb049a" containerName="extract" Dec 11 10:25:43 crc kubenswrapper[4953]: E1211 10:25:43.030908 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3eae748-2ab6-4203-826e-b7555edb049a" containerName="util" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.030915 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3eae748-2ab6-4203-826e-b7555edb049a" containerName="util" Dec 11 10:25:43 
crc kubenswrapper[4953]: E1211 10:25:43.030927 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3eae748-2ab6-4203-826e-b7555edb049a" containerName="pull" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.030934 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3eae748-2ab6-4203-826e-b7555edb049a" containerName="pull" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.031236 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3eae748-2ab6-4203-826e-b7555edb049a" containerName="extract" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.032082 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-qnz6f" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.035944 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.036252 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.036540 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-6952s" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.054202 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-qnz6f"] Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.359531 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfg98\" (UniqueName: \"kubernetes.io/projected/6a26b7e3-7f9d-4532-9070-aa467b57f0e4-kube-api-access-pfg98\") pod \"nmstate-operator-6769fb99d-qnz6f\" (UID: \"6a26b7e3-7f9d-4532-9070-aa467b57f0e4\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-qnz6f" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.461478 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfg98\" (UniqueName: \"kubernetes.io/projected/6a26b7e3-7f9d-4532-9070-aa467b57f0e4-kube-api-access-pfg98\") pod \"nmstate-operator-6769fb99d-qnz6f\" (UID: \"6a26b7e3-7f9d-4532-9070-aa467b57f0e4\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-qnz6f" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.507041 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfg98\" (UniqueName: \"kubernetes.io/projected/6a26b7e3-7f9d-4532-9070-aa467b57f0e4-kube-api-access-pfg98\") pod \"nmstate-operator-6769fb99d-qnz6f\" (UID: \"6a26b7e3-7f9d-4532-9070-aa467b57f0e4\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-qnz6f" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.685499 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-qnz6f" Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.909075 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-qnz6f"] Dec 11 10:25:43 crc kubenswrapper[4953]: W1211 10:25:43.917562 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a26b7e3_7f9d_4532_9070_aa467b57f0e4.slice/crio-f00f88233841c74b7014ef75e832ce004168412bd0b9e2df2fef0b53ba590dad WatchSource:0}: Error finding container f00f88233841c74b7014ef75e832ce004168412bd0b9e2df2fef0b53ba590dad: Status 404 returned error can't find the container with id f00f88233841c74b7014ef75e832ce004168412bd0b9e2df2fef0b53ba590dad Dec 11 10:25:43 crc kubenswrapper[4953]: I1211 10:25:43.993024 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-qnz6f" event={"ID":"6a26b7e3-7f9d-4532-9070-aa467b57f0e4","Type":"ContainerStarted","Data":"f00f88233841c74b7014ef75e832ce004168412bd0b9e2df2fef0b53ba590dad"} Dec 11 10:25:44 crc kubenswrapper[4953]: I1211 10:25:44.343268 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:44 crc kubenswrapper[4953]: I1211 10:25:44.343324 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:45 crc kubenswrapper[4953]: I1211 10:25:45.394322 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rrs6q" podUID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" containerName="registry-server" probeResult="failure" output=< Dec 11 10:25:45 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s Dec 11 10:25:45 crc kubenswrapper[4953]: > Dec 11 10:25:48 crc kubenswrapper[4953]: I1211 10:25:48.016242 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-qnz6f" event={"ID":"6a26b7e3-7f9d-4532-9070-aa467b57f0e4","Type":"ContainerStarted","Data":"b5f1e495b22e174f308d02f0eb1ca7dac938eac00c2863d1bb5b8984ff569d4d"} Dec 11 10:25:48 crc kubenswrapper[4953]: I1211 10:25:48.071014 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-qnz6f" podStartSLOduration=1.660788511 podStartE2EDuration="5.070992828s" podCreationTimestamp="2025-12-11 10:25:43 +0000 UTC" firstStartedPulling="2025-12-11 10:25:43.920439423 +0000 UTC m=+861.944298456" lastFinishedPulling="2025-12-11 10:25:47.33064375 +0000 UTC m=+865.354502773" observedRunningTime="2025-12-11 10:25:48.068349662 +0000 UTC m=+866.092208685" watchObservedRunningTime="2025-12-11 10:25:48.070992828 +0000 UTC m=+866.094851871" Dec 11 10:25:51 crc kubenswrapper[4953]: I1211 10:25:51.949565 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-cl2xq"] Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.000685 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-cl2xq" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.014698 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-6h2g9" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.029461 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-cl2xq"] Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.051498 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn"] Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.056818 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.059323 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.066548 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn"] Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.074162 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5dnlx"] Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.074794 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5dnlx" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.101021 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v45bs\" (UniqueName: \"kubernetes.io/projected/092d166e-69a2-487f-8790-77067bc1e7c6-kube-api-access-v45bs\") pod \"nmstate-webhook-f8fb84555-6vgqn\" (UID: \"092d166e-69a2-487f-8790-77067bc1e7c6\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.101086 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfpbf\" (UniqueName: \"kubernetes.io/projected/3c81d2de-4aed-4ff5-ad24-066959716a5b-kube-api-access-gfpbf\") pod \"nmstate-metrics-7f7f7578db-cl2xq\" (UID: \"3c81d2de-4aed-4ff5-ad24-066959716a5b\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-cl2xq" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.101670 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/092d166e-69a2-487f-8790-77067bc1e7c6-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-6vgqn\" (UID: \"092d166e-69a2-487f-8790-77067bc1e7c6\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.142454 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62"] Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.143430 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.149929 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-k89x4" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.149973 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.154343 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.168229 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62"] Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.202362 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/30d483a8-3c69-4a93-bb46-58c753550b0e-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-rwg62\" (UID: \"30d483a8-3c69-4a93-bb46-58c753550b0e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.202404 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/79a8dd6f-7ac1-4129-bf6e-e77efc13a47b-ovs-socket\") pod \"nmstate-handler-5dnlx\" (UID: \"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b\") " pod="openshift-nmstate/nmstate-handler-5dnlx" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.202445 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/79a8dd6f-7ac1-4129-bf6e-e77efc13a47b-nmstate-lock\") pod \"nmstate-handler-5dnlx\" (UID: \"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b\") " pod="openshift-nmstate/nmstate-handler-5dnlx" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.202515 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk9m2\" (UniqueName: \"kubernetes.io/projected/79a8dd6f-7ac1-4129-bf6e-e77efc13a47b-kube-api-access-qk9m2\") pod \"nmstate-handler-5dnlx\" (UID: \"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b\") " pod="openshift-nmstate/nmstate-handler-5dnlx" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.202562 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v45bs\" (UniqueName: \"kubernetes.io/projected/092d166e-69a2-487f-8790-77067bc1e7c6-kube-api-access-v45bs\") pod \"nmstate-webhook-f8fb84555-6vgqn\" (UID: \"092d166e-69a2-487f-8790-77067bc1e7c6\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.202626 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/79a8dd6f-7ac1-4129-bf6e-e77efc13a47b-dbus-socket\") pod \"nmstate-handler-5dnlx\" (UID: \"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b\") " pod="openshift-nmstate/nmstate-handler-5dnlx" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.202660 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfpbf\" (UniqueName: \"kubernetes.io/projected/3c81d2de-4aed-4ff5-ad24-066959716a5b-kube-api-access-gfpbf\") pod 
\"nmstate-metrics-7f7f7578db-cl2xq\" (UID: \"3c81d2de-4aed-4ff5-ad24-066959716a5b\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-cl2xq" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.202684 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v55sm\" (UniqueName: \"kubernetes.io/projected/30d483a8-3c69-4a93-bb46-58c753550b0e-kube-api-access-v55sm\") pod \"nmstate-console-plugin-6ff7998486-rwg62\" (UID: \"30d483a8-3c69-4a93-bb46-58c753550b0e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.202706 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/092d166e-69a2-487f-8790-77067bc1e7c6-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-6vgqn\" (UID: \"092d166e-69a2-487f-8790-77067bc1e7c6\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.202727 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/30d483a8-3c69-4a93-bb46-58c753550b0e-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-rwg62\" (UID: \"30d483a8-3c69-4a93-bb46-58c753550b0e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62" Dec 11 10:25:52 crc kubenswrapper[4953]: E1211 10:25:52.202813 4953 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 11 10:25:52 crc kubenswrapper[4953]: E1211 10:25:52.202877 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/092d166e-69a2-487f-8790-77067bc1e7c6-tls-key-pair podName:092d166e-69a2-487f-8790-77067bc1e7c6 nodeName:}" failed. No retries permitted until 2025-12-11 10:25:52.7028427 +0000 UTC m=+870.726701743 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/092d166e-69a2-487f-8790-77067bc1e7c6-tls-key-pair") pod "nmstate-webhook-f8fb84555-6vgqn" (UID: "092d166e-69a2-487f-8790-77067bc1e7c6") : secret "openshift-nmstate-webhook" not found Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.222371 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfpbf\" (UniqueName: \"kubernetes.io/projected/3c81d2de-4aed-4ff5-ad24-066959716a5b-kube-api-access-gfpbf\") pod \"nmstate-metrics-7f7f7578db-cl2xq\" (UID: \"3c81d2de-4aed-4ff5-ad24-066959716a5b\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-cl2xq" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.228103 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45bs\" (UniqueName: \"kubernetes.io/projected/092d166e-69a2-487f-8790-77067bc1e7c6-kube-api-access-v45bs\") pod \"nmstate-webhook-f8fb84555-6vgqn\" (UID: \"092d166e-69a2-487f-8790-77067bc1e7c6\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.303685 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk9m2\" (UniqueName: \"kubernetes.io/projected/79a8dd6f-7ac1-4129-bf6e-e77efc13a47b-kube-api-access-qk9m2\") pod \"nmstate-handler-5dnlx\" (UID: \"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b\") " pod="openshift-nmstate/nmstate-handler-5dnlx" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.303778 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/79a8dd6f-7ac1-4129-bf6e-e77efc13a47b-dbus-socket\") pod \"nmstate-handler-5dnlx\" (UID: \"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b\") " pod="openshift-nmstate/nmstate-handler-5dnlx" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.303812 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v55sm\" (UniqueName: \"kubernetes.io/projected/30d483a8-3c69-4a93-bb46-58c753550b0e-kube-api-access-v55sm\") pod \"nmstate-console-plugin-6ff7998486-rwg62\" (UID: \"30d483a8-3c69-4a93-bb46-58c753550b0e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.303861 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/30d483a8-3c69-4a93-bb46-58c753550b0e-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-rwg62\" (UID: \"30d483a8-3c69-4a93-bb46-58c753550b0e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.303886 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/30d483a8-3c69-4a93-bb46-58c753550b0e-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-rwg62\" (UID: \"30d483a8-3c69-4a93-bb46-58c753550b0e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.303910 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/79a8dd6f-7ac1-4129-bf6e-e77efc13a47b-ovs-socket\") pod \"nmstate-handler-5dnlx\" (UID: \"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b\") " pod="openshift-nmstate/nmstate-handler-5dnlx" Dec 11 10:25:52 crc 
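The tls-key-pair entries above show the volume manager's per-operation retry discipline: at 10:25:52.202 the webhook's Secret does not exist yet, so nestedpendingoperations forbids retries for 500ms (the delay grows on repeated failures), and the mount then succeeds at 10:25:52.716 once the Secret has been created. Below, not part of the log, is a minimal sketch of that retry shape using the apimachinery wait helpers; getSecret and the attempt counts are hypothetical stand-ins:

    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    var errNotFound = errors.New(`secret "openshift-nmstate-webhook" not found`)

    // getSecret pretends the Secret appears after a couple of attempts, the way
    // the operator created it between 10:25:52.202 and 10:25:52.716 in the log.
    func getSecret(attempt *int) error {
        *attempt++
        if *attempt < 3 {
            return errNotFound
        }
        return nil
    }

    func main() {
        attempt := 0
        backoff := wait.Backoff{
            Duration: 500 * time.Millisecond, // initial delay, as in the log entry
            Factor:   2.0,                    // delay doubles on each failure
            Steps:    5,
        }
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            if err := getSecret(&attempt); err != nil {
                fmt.Println("mount attempt failed:", err)
                return false, nil // retry after the backoff delay
            }
            return true, nil // MountVolume.SetUp succeeded
        })
        fmt.Println("done, attempts:", attempt, "err:", err)
    }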
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.304034 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/79a8dd6f-7ac1-4129-bf6e-e77efc13a47b-nmstate-lock\") pod \"nmstate-handler-5dnlx\" (UID: \"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b\") " pod="openshift-nmstate/nmstate-handler-5dnlx"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.304339 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/79a8dd6f-7ac1-4129-bf6e-e77efc13a47b-dbus-socket\") pod \"nmstate-handler-5dnlx\" (UID: \"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b\") " pod="openshift-nmstate/nmstate-handler-5dnlx"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.304380 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/79a8dd6f-7ac1-4129-bf6e-e77efc13a47b-ovs-socket\") pod \"nmstate-handler-5dnlx\" (UID: \"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b\") " pod="openshift-nmstate/nmstate-handler-5dnlx"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.305254 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/30d483a8-3c69-4a93-bb46-58c753550b0e-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-rwg62\" (UID: \"30d483a8-3c69-4a93-bb46-58c753550b0e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.307848 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/30d483a8-3c69-4a93-bb46-58c753550b0e-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-rwg62\" (UID: \"30d483a8-3c69-4a93-bb46-58c753550b0e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.324485 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk9m2\" (UniqueName: \"kubernetes.io/projected/79a8dd6f-7ac1-4129-bf6e-e77efc13a47b-kube-api-access-qk9m2\") pod \"nmstate-handler-5dnlx\" (UID: \"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b\") " pod="openshift-nmstate/nmstate-handler-5dnlx"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.328111 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v55sm\" (UniqueName: \"kubernetes.io/projected/30d483a8-3c69-4a93-bb46-58c753550b0e-kube-api-access-v55sm\") pod \"nmstate-console-plugin-6ff7998486-rwg62\" (UID: \"30d483a8-3c69-4a93-bb46-58c753550b0e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.344015 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-cl2xq"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.353313 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-747b56dcd7-vh44c"]
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.353992 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.373948 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-747b56dcd7-vh44c"]
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.401274 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5dnlx"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.404939 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/510ecc80-647b-4c77-9174-20f161f8bd04-console-oauth-config\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.405007 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/510ecc80-647b-4c77-9174-20f161f8bd04-console-config\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.405064 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5gm5\" (UniqueName: \"kubernetes.io/projected/510ecc80-647b-4c77-9174-20f161f8bd04-kube-api-access-v5gm5\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.405119 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/510ecc80-647b-4c77-9174-20f161f8bd04-console-serving-cert\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.405200 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/510ecc80-647b-4c77-9174-20f161f8bd04-service-ca\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.405233 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/510ecc80-647b-4c77-9174-20f161f8bd04-trusted-ca-bundle\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.405257 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/510ecc80-647b-4c77-9174-20f161f8bd04-oauth-serving-cert\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.464792 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.508182 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5gm5\" (UniqueName: \"kubernetes.io/projected/510ecc80-647b-4c77-9174-20f161f8bd04-kube-api-access-v5gm5\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.508237 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/510ecc80-647b-4c77-9174-20f161f8bd04-console-serving-cert\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.508280 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/510ecc80-647b-4c77-9174-20f161f8bd04-service-ca\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.508304 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/510ecc80-647b-4c77-9174-20f161f8bd04-trusted-ca-bundle\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.508328 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/510ecc80-647b-4c77-9174-20f161f8bd04-oauth-serving-cert\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.508371 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/510ecc80-647b-4c77-9174-20f161f8bd04-console-oauth-config\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.508392 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/510ecc80-647b-4c77-9174-20f161f8bd04-console-config\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.509285 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/510ecc80-647b-4c77-9174-20f161f8bd04-console-config\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.510125 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/510ecc80-647b-4c77-9174-20f161f8bd04-service-ca\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.514053 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/510ecc80-647b-4c77-9174-20f161f8bd04-console-serving-cert\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.515885 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/510ecc80-647b-4c77-9174-20f161f8bd04-trusted-ca-bundle\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.518070 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/510ecc80-647b-4c77-9174-20f161f8bd04-console-oauth-config\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.521665 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/510ecc80-647b-4c77-9174-20f161f8bd04-oauth-serving-cert\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.529678 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5gm5\" (UniqueName: \"kubernetes.io/projected/510ecc80-647b-4c77-9174-20f161f8bd04-kube-api-access-v5gm5\") pod \"console-747b56dcd7-vh44c\" (UID: \"510ecc80-647b-4c77-9174-20f161f8bd04\") " pod="openshift-console/console-747b56dcd7-vh44c"
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.597431 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-cl2xq"]
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.648499 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62"]
Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.683697 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-747b56dcd7-vh44c"
Need to start a new one" pod="openshift-console/console-747b56dcd7-vh44c" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.711432 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/092d166e-69a2-487f-8790-77067bc1e7c6-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-6vgqn\" (UID: \"092d166e-69a2-487f-8790-77067bc1e7c6\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.716037 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/092d166e-69a2-487f-8790-77067bc1e7c6-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-6vgqn\" (UID: \"092d166e-69a2-487f-8790-77067bc1e7c6\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.886559 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-747b56dcd7-vh44c"] Dec 11 10:25:52 crc kubenswrapper[4953]: I1211 10:25:52.977213 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" Dec 11 10:25:53 crc kubenswrapper[4953]: I1211 10:25:53.051611 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-747b56dcd7-vh44c" event={"ID":"510ecc80-647b-4c77-9174-20f161f8bd04","Type":"ContainerStarted","Data":"1de381e91b898d290e56d994fa187827ec16e3d64aa51803891a2ce02ad394e1"} Dec 11 10:25:53 crc kubenswrapper[4953]: I1211 10:25:53.051945 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-747b56dcd7-vh44c" event={"ID":"510ecc80-647b-4c77-9174-20f161f8bd04","Type":"ContainerStarted","Data":"734146588c527672a21713dd7f8fbf9a17e9402f49f9b94a6fdc80ef5261eb96"} Dec 11 10:25:53 crc kubenswrapper[4953]: I1211 10:25:53.053118 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62" event={"ID":"30d483a8-3c69-4a93-bb46-58c753550b0e","Type":"ContainerStarted","Data":"f2440832ddc7271e67752abfba26fc9ca0215e73c1d0b510bb18d2aceec292d8"} Dec 11 10:25:53 crc kubenswrapper[4953]: I1211 10:25:53.068640 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-cl2xq" event={"ID":"3c81d2de-4aed-4ff5-ad24-066959716a5b","Type":"ContainerStarted","Data":"107dad86437337955c8a08a2d9b0caf2d02ee8b37c58e510ae939756c8e0e563"} Dec 11 10:25:53 crc kubenswrapper[4953]: I1211 10:25:53.072589 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5dnlx" event={"ID":"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b","Type":"ContainerStarted","Data":"03537328e178b9aff5626ccf820f00111244336e429c4ca4f8f097d597af8510"} Dec 11 10:25:53 crc kubenswrapper[4953]: I1211 10:25:53.087417 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-747b56dcd7-vh44c" podStartSLOduration=1.08736954 podStartE2EDuration="1.08736954s" podCreationTimestamp="2025-12-11 10:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:25:53.085607993 +0000 UTC m=+871.109467026" watchObservedRunningTime="2025-12-11 10:25:53.08736954 +0000 UTC m=+871.111228583" Dec 11 10:25:53 crc kubenswrapper[4953]: I1211 10:25:53.181060 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn"] Dec 11 10:25:53 crc kubenswrapper[4953]: W1211 10:25:53.190046 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod092d166e_69a2_487f_8790_77067bc1e7c6.slice/crio-df982a39803a70a03183f215ad0fede143f99319ac316fc600d1a8f5c7d8525e WatchSource:0}: Error finding container df982a39803a70a03183f215ad0fede143f99319ac316fc600d1a8f5c7d8525e: Status 404 returned error can't find the container with id df982a39803a70a03183f215ad0fede143f99319ac316fc600d1a8f5c7d8525e Dec 11 10:25:54 crc kubenswrapper[4953]: I1211 10:25:54.078191 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" event={"ID":"092d166e-69a2-487f-8790-77067bc1e7c6","Type":"ContainerStarted","Data":"df982a39803a70a03183f215ad0fede143f99319ac316fc600d1a8f5c7d8525e"} Dec 11 10:25:54 crc kubenswrapper[4953]: I1211 10:25:54.391030 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:54 crc kubenswrapper[4953]: I1211 10:25:54.438737 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:54 crc kubenswrapper[4953]: I1211 10:25:54.628880 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrs6q"] Dec 11 10:25:56 crc kubenswrapper[4953]: I1211 10:25:56.092397 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-cl2xq" event={"ID":"3c81d2de-4aed-4ff5-ad24-066959716a5b","Type":"ContainerStarted","Data":"00e08b0b0769dd3ccf5eff78fbe700bdf9974a6cbc0e085a826f28c4e46b2a52"} Dec 11 10:25:56 crc kubenswrapper[4953]: I1211 10:25:56.094262 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" event={"ID":"092d166e-69a2-487f-8790-77067bc1e7c6","Type":"ContainerStarted","Data":"9be92f05cb4ce6aa13efd1ce1ee23f4594ee05f5872b4db3dc24ce2117ecf40d"} Dec 11 10:25:56 crc kubenswrapper[4953]: I1211 10:25:56.094621 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rrs6q" podUID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" containerName="registry-server" containerID="cri-o://fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327" gracePeriod=2 Dec 11 10:25:56 crc kubenswrapper[4953]: I1211 10:25:56.119637 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" podStartSLOduration=2.885044576 podStartE2EDuration="5.119617011s" podCreationTimestamp="2025-12-11 10:25:51 +0000 UTC" firstStartedPulling="2025-12-11 10:25:53.191872097 +0000 UTC m=+871.215731130" lastFinishedPulling="2025-12-11 10:25:55.426444502 +0000 UTC m=+873.450303565" observedRunningTime="2025-12-11 10:25:56.117356887 +0000 UTC m=+874.141215920" watchObservedRunningTime="2025-12-11 10:25:56.119617011 +0000 UTC m=+874.143476064" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.001151 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.103033 4953 generic.go:334] "Generic (PLEG): container finished" podID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" containerID="fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327" exitCode=0 Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.103121 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrs6q" event={"ID":"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7","Type":"ContainerDied","Data":"fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327"} Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.103129 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrs6q" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.103168 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrs6q" event={"ID":"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7","Type":"ContainerDied","Data":"0f97f9edbd7e4d88195745c3d55ccbdc48824e477d5adeaa5bb51a47b6ecf4d1"} Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.103187 4953 scope.go:117] "RemoveContainer" containerID="fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.105733 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5dnlx" event={"ID":"79a8dd6f-7ac1-4129-bf6e-e77efc13a47b","Type":"ContainerStarted","Data":"0c6bc5af618052b95516d72070435638125a8cb1df2336b7fb43c0a10878e4bb"} Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.105860 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5dnlx" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.256737 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-catalog-content\") pod \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\" (UID: \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\") " Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.256820 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrqhj\" (UniqueName: \"kubernetes.io/projected/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-kube-api-access-xrqhj\") pod \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\" (UID: \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\") " Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.256870 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-utilities\") pod \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\" (UID: \"7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7\") " Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.258613 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.258655 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-utilities" (OuterVolumeSpecName: "utilities") pod "7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" (UID: "7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.267407 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-kube-api-access-xrqhj" (OuterVolumeSpecName: "kube-api-access-xrqhj") pod "7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" (UID: "7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7"). InnerVolumeSpecName "kube-api-access-xrqhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.284253 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5dnlx" podStartSLOduration=1.901145713 podStartE2EDuration="5.284232148s" podCreationTimestamp="2025-12-11 10:25:52 +0000 UTC" firstStartedPulling="2025-12-11 10:25:52.427393433 +0000 UTC m=+870.451252466" lastFinishedPulling="2025-12-11 10:25:55.810479858 +0000 UTC m=+873.834338901" observedRunningTime="2025-12-11 10:25:57.277098744 +0000 UTC m=+875.300957787" watchObservedRunningTime="2025-12-11 10:25:57.284232148 +0000 UTC m=+875.308091181" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.293210 4953 scope.go:117] "RemoveContainer" containerID="c871cc1ca7e923d3c61f6d526e1a3722409dbb63cb23e374eb66783f8309d05c" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.314657 4953 scope.go:117] "RemoveContainer" containerID="ff769c0c890cfb72a8e1bc78678692be2930affeea027d09879baba9ace17364" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.330905 4953 scope.go:117] "RemoveContainer" containerID="fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327" Dec 11 10:25:57 crc kubenswrapper[4953]: E1211 10:25:57.331345 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327\": container with ID starting with fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327 not found: ID does not exist" containerID="fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.331404 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327"} err="failed to get container status \"fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327\": rpc error: code = NotFound desc = could not find container \"fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327\": container with ID starting with fc152b8e0d4b9bed80dfb4c9bd34fb418aa525f383a22168e2ec4019c3aa4327 not found: ID does not exist" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.331455 4953 scope.go:117] "RemoveContainer" containerID="c871cc1ca7e923d3c61f6d526e1a3722409dbb63cb23e374eb66783f8309d05c" Dec 11 10:25:57 crc kubenswrapper[4953]: E1211 10:25:57.331840 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c871cc1ca7e923d3c61f6d526e1a3722409dbb63cb23e374eb66783f8309d05c\": container with ID starting with c871cc1ca7e923d3c61f6d526e1a3722409dbb63cb23e374eb66783f8309d05c not found: ID does not exist" containerID="c871cc1ca7e923d3c61f6d526e1a3722409dbb63cb23e374eb66783f8309d05c" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.331887 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c871cc1ca7e923d3c61f6d526e1a3722409dbb63cb23e374eb66783f8309d05c"} err="failed to get container status \"c871cc1ca7e923d3c61f6d526e1a3722409dbb63cb23e374eb66783f8309d05c\": rpc error: code = NotFound desc = could not find container \"c871cc1ca7e923d3c61f6d526e1a3722409dbb63cb23e374eb66783f8309d05c\": container with ID starting with c871cc1ca7e923d3c61f6d526e1a3722409dbb63cb23e374eb66783f8309d05c not found: ID does not exist" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.331925 4953 scope.go:117] "RemoveContainer" containerID="ff769c0c890cfb72a8e1bc78678692be2930affeea027d09879baba9ace17364" Dec 11 10:25:57 crc kubenswrapper[4953]: E1211 10:25:57.332393 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff769c0c890cfb72a8e1bc78678692be2930affeea027d09879baba9ace17364\": container with ID starting with ff769c0c890cfb72a8e1bc78678692be2930affeea027d09879baba9ace17364 not found: ID does not exist" containerID="ff769c0c890cfb72a8e1bc78678692be2930affeea027d09879baba9ace17364" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.332463 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff769c0c890cfb72a8e1bc78678692be2930affeea027d09879baba9ace17364"} err="failed to get container status \"ff769c0c890cfb72a8e1bc78678692be2930affeea027d09879baba9ace17364\": rpc error: code = NotFound desc = could not find container \"ff769c0c890cfb72a8e1bc78678692be2930affeea027d09879baba9ace17364\": container with ID starting with ff769c0c890cfb72a8e1bc78678692be2930affeea027d09879baba9ace17364 not found: ID does not exist" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.359635 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrqhj\" (UniqueName: \"kubernetes.io/projected/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-kube-api-access-xrqhj\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.359675 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.414435 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" (UID: "7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.460670 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.757680 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrs6q"] Dec 11 10:25:57 crc kubenswrapper[4953]: I1211 10:25:57.786911 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rrs6q"] Dec 11 10:25:58 crc kubenswrapper[4953]: I1211 10:25:58.480015 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" path="/var/lib/kubelet/pods/7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7/volumes" Dec 11 10:25:59 crc kubenswrapper[4953]: I1211 10:25:59.132792 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62" event={"ID":"30d483a8-3c69-4a93-bb46-58c753550b0e","Type":"ContainerStarted","Data":"77fc9963f83d8862af695d5b817919ed9b1378a66fc175bc4ff84db7c916fc1e"} Dec 11 10:25:59 crc kubenswrapper[4953]: I1211 10:25:59.138746 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-cl2xq" event={"ID":"3c81d2de-4aed-4ff5-ad24-066959716a5b","Type":"ContainerStarted","Data":"3cdff08dc52753d3811ecdee21bfb2ba20ca661396bd9d4e62d7ef9e9001a2e5"} Dec 11 10:25:59 crc kubenswrapper[4953]: I1211 10:25:59.160885 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rwg62" podStartSLOduration=0.962748754 podStartE2EDuration="7.160855708s" podCreationTimestamp="2025-12-11 10:25:52 +0000 UTC" firstStartedPulling="2025-12-11 10:25:52.657916412 +0000 UTC m=+870.681775445" lastFinishedPulling="2025-12-11 10:25:58.856023366 +0000 UTC m=+876.879882399" observedRunningTime="2025-12-11 10:25:59.155912176 +0000 UTC m=+877.179771239" watchObservedRunningTime="2025-12-11 10:25:59.160855708 +0000 UTC m=+877.184714781" Dec 11 10:25:59 crc kubenswrapper[4953]: I1211 10:25:59.186811 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-cl2xq" podStartSLOduration=1.888919106 podStartE2EDuration="8.186794551s" podCreationTimestamp="2025-12-11 10:25:51 +0000 UTC" firstStartedPulling="2025-12-11 10:25:52.609016544 +0000 UTC m=+870.632875577" lastFinishedPulling="2025-12-11 10:25:58.906891989 +0000 UTC m=+876.930751022" observedRunningTime="2025-12-11 10:25:59.18039548 +0000 UTC m=+877.204254513" watchObservedRunningTime="2025-12-11 10:25:59.186794551 +0000 UTC m=+877.210653584" Dec 11 10:26:02 crc kubenswrapper[4953]: I1211 10:26:02.440280 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5dnlx" Dec 11 10:26:02 crc kubenswrapper[4953]: I1211 10:26:02.684334 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-747b56dcd7-vh44c" Dec 11 10:26:02 crc kubenswrapper[4953]: I1211 10:26:02.684393 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-747b56dcd7-vh44c" Dec 11 10:26:02 crc kubenswrapper[4953]: I1211 10:26:02.690267 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-747b56dcd7-vh44c" Dec 11 10:26:03 crc kubenswrapper[4953]: I1211 10:26:03.176227 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-747b56dcd7-vh44c" Dec 11 10:26:03 crc kubenswrapper[4953]: I1211 10:26:03.247628 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wfrqd"] Dec 11 10:26:12 crc kubenswrapper[4953]: I1211 10:26:12.984622 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-6vgqn" Dec 11 10:26:26 crc kubenswrapper[4953]: I1211 10:26:26.817095 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx"] Dec 11 10:26:26 crc kubenswrapper[4953]: E1211 10:26:26.818997 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" containerName="extract-content" Dec 11 10:26:26 crc kubenswrapper[4953]: I1211 10:26:26.819040 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" containerName="extract-content" Dec 11 10:26:26 crc kubenswrapper[4953]: E1211 10:26:26.819051 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" containerName="extract-utilities" Dec 11 10:26:26 crc kubenswrapper[4953]: I1211 10:26:26.819061 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" containerName="extract-utilities" Dec 11 10:26:26 crc kubenswrapper[4953]: E1211 10:26:26.819071 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" containerName="registry-server" Dec 11 10:26:26 crc kubenswrapper[4953]: I1211 10:26:26.819078 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" containerName="registry-server" Dec 11 10:26:26 crc kubenswrapper[4953]: I1211 10:26:26.819184 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e99dc28-22e4-4d3f-8e4d-48f8e9770fe7" containerName="registry-server" Dec 11 10:26:26 crc kubenswrapper[4953]: I1211 10:26:26.820081 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:26 crc kubenswrapper[4953]: I1211 10:26:26.823025 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 10:26:26 crc kubenswrapper[4953]: I1211 10:26:26.828966 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx"] Dec 11 10:26:26 crc kubenswrapper[4953]: I1211 10:26:26.984118 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bkt6\" (UniqueName: \"kubernetes.io/projected/40077d7f-4309-4339-91d0-7596ed662f75-kube-api-access-2bkt6\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx\" (UID: \"40077d7f-4309-4339-91d0-7596ed662f75\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:26 crc kubenswrapper[4953]: I1211 10:26:26.984176 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40077d7f-4309-4339-91d0-7596ed662f75-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx\" (UID: \"40077d7f-4309-4339-91d0-7596ed662f75\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:26 crc kubenswrapper[4953]: I1211 10:26:26.984199 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40077d7f-4309-4339-91d0-7596ed662f75-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx\" (UID: \"40077d7f-4309-4339-91d0-7596ed662f75\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:27 crc kubenswrapper[4953]: I1211 10:26:27.085025 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bkt6\" (UniqueName: \"kubernetes.io/projected/40077d7f-4309-4339-91d0-7596ed662f75-kube-api-access-2bkt6\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx\" (UID: \"40077d7f-4309-4339-91d0-7596ed662f75\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:27 crc kubenswrapper[4953]: I1211 10:26:27.085083 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40077d7f-4309-4339-91d0-7596ed662f75-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx\" (UID: \"40077d7f-4309-4339-91d0-7596ed662f75\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:27 crc kubenswrapper[4953]: I1211 10:26:27.085114 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40077d7f-4309-4339-91d0-7596ed662f75-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx\" (UID: \"40077d7f-4309-4339-91d0-7596ed662f75\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:27 crc kubenswrapper[4953]: I1211 10:26:27.085663 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/40077d7f-4309-4339-91d0-7596ed662f75-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx\" (UID: \"40077d7f-4309-4339-91d0-7596ed662f75\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:27 crc kubenswrapper[4953]: I1211 10:26:27.086125 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40077d7f-4309-4339-91d0-7596ed662f75-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx\" (UID: \"40077d7f-4309-4339-91d0-7596ed662f75\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:27 crc kubenswrapper[4953]: I1211 10:26:27.119122 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bkt6\" (UniqueName: \"kubernetes.io/projected/40077d7f-4309-4339-91d0-7596ed662f75-kube-api-access-2bkt6\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx\" (UID: \"40077d7f-4309-4339-91d0-7596ed662f75\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:27 crc kubenswrapper[4953]: I1211 10:26:27.135544 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:27 crc kubenswrapper[4953]: I1211 10:26:27.378816 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx"] Dec 11 10:26:27 crc kubenswrapper[4953]: I1211 10:26:27.438145 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" event={"ID":"40077d7f-4309-4339-91d0-7596ed662f75","Type":"ContainerStarted","Data":"b1af8877151bb88d493f2833e683ee231b71ac79db9817389f6317185d021634"} Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.294920 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-wfrqd" podUID="6a593442-828c-4cff-b9b9-4efa41ef6f44" containerName="console" containerID="cri-o://c01783552eecd0d5a0ac23b8c1bcd503a75a30f2bda5b53efa242177d19e5b48" gracePeriod=15 Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.446969 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wfrqd_6a593442-828c-4cff-b9b9-4efa41ef6f44/console/0.log" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.447028 4953 generic.go:334] "Generic (PLEG): container finished" podID="6a593442-828c-4cff-b9b9-4efa41ef6f44" containerID="c01783552eecd0d5a0ac23b8c1bcd503a75a30f2bda5b53efa242177d19e5b48" exitCode=2 Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.447090 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wfrqd" event={"ID":"6a593442-828c-4cff-b9b9-4efa41ef6f44","Type":"ContainerDied","Data":"c01783552eecd0d5a0ac23b8c1bcd503a75a30f2bda5b53efa242177d19e5b48"} Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.449752 4953 generic.go:334] "Generic (PLEG): container finished" podID="40077d7f-4309-4339-91d0-7596ed662f75" containerID="289867edd876255db11cbd11cff6e19d253957799af21dee9108ef74f812ff4e" exitCode=0 Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.449809 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" event={"ID":"40077d7f-4309-4339-91d0-7596ed662f75","Type":"ContainerDied","Data":"289867edd876255db11cbd11cff6e19d253957799af21dee9108ef74f812ff4e"} Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.669437 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wfrqd_6a593442-828c-4cff-b9b9-4efa41ef6f44/console/0.log" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.669719 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.719277 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-service-ca\") pod \"6a593442-828c-4cff-b9b9-4efa41ef6f44\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.719384 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-serving-cert\") pod \"6a593442-828c-4cff-b9b9-4efa41ef6f44\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.719422 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7gr6\" (UniqueName: \"kubernetes.io/projected/6a593442-828c-4cff-b9b9-4efa41ef6f44-kube-api-access-s7gr6\") pod \"6a593442-828c-4cff-b9b9-4efa41ef6f44\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.719488 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-config\") pod \"6a593442-828c-4cff-b9b9-4efa41ef6f44\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.719527 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-oauth-config\") pod \"6a593442-828c-4cff-b9b9-4efa41ef6f44\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.719564 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-trusted-ca-bundle\") pod \"6a593442-828c-4cff-b9b9-4efa41ef6f44\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.719625 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-oauth-serving-cert\") pod \"6a593442-828c-4cff-b9b9-4efa41ef6f44\" (UID: \"6a593442-828c-4cff-b9b9-4efa41ef6f44\") " Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.720346 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-config" (OuterVolumeSpecName: "console-config") pod "6a593442-828c-4cff-b9b9-4efa41ef6f44" (UID: "6a593442-828c-4cff-b9b9-4efa41ef6f44"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.720321 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a593442-828c-4cff-b9b9-4efa41ef6f44" (UID: "6a593442-828c-4cff-b9b9-4efa41ef6f44"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.720623 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a593442-828c-4cff-b9b9-4efa41ef6f44" (UID: "6a593442-828c-4cff-b9b9-4efa41ef6f44"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.720744 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a593442-828c-4cff-b9b9-4efa41ef6f44" (UID: "6a593442-828c-4cff-b9b9-4efa41ef6f44"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.721780 4953 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.721810 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.721825 4953 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.721837 4953 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a593442-828c-4cff-b9b9-4efa41ef6f44-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.725994 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a593442-828c-4cff-b9b9-4efa41ef6f44-kube-api-access-s7gr6" (OuterVolumeSpecName: "kube-api-access-s7gr6") pod "6a593442-828c-4cff-b9b9-4efa41ef6f44" (UID: "6a593442-828c-4cff-b9b9-4efa41ef6f44"). InnerVolumeSpecName "kube-api-access-s7gr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.726736 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a593442-828c-4cff-b9b9-4efa41ef6f44" (UID: "6a593442-828c-4cff-b9b9-4efa41ef6f44"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.726989 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a593442-828c-4cff-b9b9-4efa41ef6f44" (UID: "6a593442-828c-4cff-b9b9-4efa41ef6f44"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.823017 4953 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.823067 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7gr6\" (UniqueName: \"kubernetes.io/projected/6a593442-828c-4cff-b9b9-4efa41ef6f44-kube-api-access-s7gr6\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:28 crc kubenswrapper[4953]: I1211 10:26:28.823082 4953 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a593442-828c-4cff-b9b9-4efa41ef6f44-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:29 crc kubenswrapper[4953]: I1211 10:26:29.456065 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wfrqd_6a593442-828c-4cff-b9b9-4efa41ef6f44/console/0.log" Dec 11 10:26:29 crc kubenswrapper[4953]: I1211 10:26:29.456124 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wfrqd" event={"ID":"6a593442-828c-4cff-b9b9-4efa41ef6f44","Type":"ContainerDied","Data":"048fef2fc96c4f67d3f0e81fe8e18db495515590a26df2c37c93829468a58750"} Dec 11 10:26:29 crc kubenswrapper[4953]: I1211 10:26:29.456166 4953 scope.go:117] "RemoveContainer" containerID="c01783552eecd0d5a0ac23b8c1bcd503a75a30f2bda5b53efa242177d19e5b48" Dec 11 10:26:29 crc kubenswrapper[4953]: I1211 10:26:29.456282 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wfrqd" Dec 11 10:26:29 crc kubenswrapper[4953]: I1211 10:26:29.484972 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wfrqd"] Dec 11 10:26:29 crc kubenswrapper[4953]: I1211 10:26:29.490160 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-wfrqd"] Dec 11 10:26:30 crc kubenswrapper[4953]: I1211 10:26:30.481408 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a593442-828c-4cff-b9b9-4efa41ef6f44" path="/var/lib/kubelet/pods/6a593442-828c-4cff-b9b9-4efa41ef6f44/volumes" Dec 11 10:26:34 crc kubenswrapper[4953]: I1211 10:26:34.493676 4953 generic.go:334] "Generic (PLEG): container finished" podID="40077d7f-4309-4339-91d0-7596ed662f75" containerID="46235344bcbbf40e044a013ea9e58d77ee437d16da57cbeb4a370906598c1e24" exitCode=0 Dec 11 10:26:34 crc kubenswrapper[4953]: I1211 10:26:34.495757 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" event={"ID":"40077d7f-4309-4339-91d0-7596ed662f75","Type":"ContainerDied","Data":"46235344bcbbf40e044a013ea9e58d77ee437d16da57cbeb4a370906598c1e24"} Dec 11 10:26:35 crc kubenswrapper[4953]: I1211 10:26:35.507403 4953 generic.go:334] "Generic (PLEG): container finished" podID="40077d7f-4309-4339-91d0-7596ed662f75" containerID="26417db949b242277f88de84e2edbbcc361d8e7dc881e9db0c87a329caaa2121" exitCode=0 Dec 11 10:26:35 crc kubenswrapper[4953]: I1211 10:26:35.507534 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" event={"ID":"40077d7f-4309-4339-91d0-7596ed662f75","Type":"ContainerDied","Data":"26417db949b242277f88de84e2edbbcc361d8e7dc881e9db0c87a329caaa2121"} Dec 11 10:26:36 crc kubenswrapper[4953]: I1211 10:26:36.772799 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:36 crc kubenswrapper[4953]: I1211 10:26:36.925013 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40077d7f-4309-4339-91d0-7596ed662f75-util\") pod \"40077d7f-4309-4339-91d0-7596ed662f75\" (UID: \"40077d7f-4309-4339-91d0-7596ed662f75\") " Dec 11 10:26:36 crc kubenswrapper[4953]: I1211 10:26:36.925104 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40077d7f-4309-4339-91d0-7596ed662f75-bundle\") pod \"40077d7f-4309-4339-91d0-7596ed662f75\" (UID: \"40077d7f-4309-4339-91d0-7596ed662f75\") " Dec 11 10:26:36 crc kubenswrapper[4953]: I1211 10:26:36.925152 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bkt6\" (UniqueName: \"kubernetes.io/projected/40077d7f-4309-4339-91d0-7596ed662f75-kube-api-access-2bkt6\") pod \"40077d7f-4309-4339-91d0-7596ed662f75\" (UID: \"40077d7f-4309-4339-91d0-7596ed662f75\") " Dec 11 10:26:36 crc kubenswrapper[4953]: I1211 10:26:36.927238 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40077d7f-4309-4339-91d0-7596ed662f75-bundle" (OuterVolumeSpecName: "bundle") pod "40077d7f-4309-4339-91d0-7596ed662f75" (UID: "40077d7f-4309-4339-91d0-7596ed662f75"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:26:36 crc kubenswrapper[4953]: I1211 10:26:36.932679 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40077d7f-4309-4339-91d0-7596ed662f75-kube-api-access-2bkt6" (OuterVolumeSpecName: "kube-api-access-2bkt6") pod "40077d7f-4309-4339-91d0-7596ed662f75" (UID: "40077d7f-4309-4339-91d0-7596ed662f75"). InnerVolumeSpecName "kube-api-access-2bkt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:26:36 crc kubenswrapper[4953]: I1211 10:26:36.940220 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40077d7f-4309-4339-91d0-7596ed662f75-util" (OuterVolumeSpecName: "util") pod "40077d7f-4309-4339-91d0-7596ed662f75" (UID: "40077d7f-4309-4339-91d0-7596ed662f75"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:26:37 crc kubenswrapper[4953]: I1211 10:26:37.026907 4953 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40077d7f-4309-4339-91d0-7596ed662f75-util\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:37 crc kubenswrapper[4953]: I1211 10:26:37.026954 4953 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40077d7f-4309-4339-91d0-7596ed662f75-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:37 crc kubenswrapper[4953]: I1211 10:26:37.026968 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bkt6\" (UniqueName: \"kubernetes.io/projected/40077d7f-4309-4339-91d0-7596ed662f75-kube-api-access-2bkt6\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:37 crc kubenswrapper[4953]: I1211 10:26:37.532275 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" event={"ID":"40077d7f-4309-4339-91d0-7596ed662f75","Type":"ContainerDied","Data":"b1af8877151bb88d493f2833e683ee231b71ac79db9817389f6317185d021634"} Dec 11 10:26:37 crc kubenswrapper[4953]: I1211 10:26:37.532322 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1af8877151bb88d493f2833e683ee231b71ac79db9817389f6317185d021634" Dec 11 10:26:37 crc kubenswrapper[4953]: I1211 10:26:37.532384 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.217691 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg"] Dec 11 10:26:50 crc kubenswrapper[4953]: E1211 10:26:50.218668 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a593442-828c-4cff-b9b9-4efa41ef6f44" containerName="console" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.218701 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a593442-828c-4cff-b9b9-4efa41ef6f44" containerName="console" Dec 11 10:26:50 crc kubenswrapper[4953]: E1211 10:26:50.218714 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40077d7f-4309-4339-91d0-7596ed662f75" containerName="extract" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.218721 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="40077d7f-4309-4339-91d0-7596ed662f75" containerName="extract" Dec 11 10:26:50 crc kubenswrapper[4953]: E1211 10:26:50.218735 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40077d7f-4309-4339-91d0-7596ed662f75" containerName="util" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.218742 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="40077d7f-4309-4339-91d0-7596ed662f75" containerName="util" Dec 11 10:26:50 crc kubenswrapper[4953]: E1211 10:26:50.218753 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40077d7f-4309-4339-91d0-7596ed662f75" containerName="pull" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.218760 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="40077d7f-4309-4339-91d0-7596ed662f75" containerName="pull" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.218938 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="40077d7f-4309-4339-91d0-7596ed662f75" containerName="extract" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.218957 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a593442-828c-4cff-b9b9-4efa41ef6f44" containerName="console" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.219489 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.222012 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.222319 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-b6xpc" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.226950 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.227708 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.227771 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.235917 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg"] Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.264444 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9dd77619-44b9-4c77-a3c9-da9aca01ebdf-apiservice-cert\") pod \"metallb-operator-controller-manager-7df594d648-xsbbg\" (UID: \"9dd77619-44b9-4c77-a3c9-da9aca01ebdf\") " pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.264512 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89b8j\" (UniqueName: \"kubernetes.io/projected/9dd77619-44b9-4c77-a3c9-da9aca01ebdf-kube-api-access-89b8j\") pod \"metallb-operator-controller-manager-7df594d648-xsbbg\" (UID: \"9dd77619-44b9-4c77-a3c9-da9aca01ebdf\") " pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.264591 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9dd77619-44b9-4c77-a3c9-da9aca01ebdf-webhook-cert\") pod \"metallb-operator-controller-manager-7df594d648-xsbbg\" (UID: \"9dd77619-44b9-4c77-a3c9-da9aca01ebdf\") " pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.366099 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9dd77619-44b9-4c77-a3c9-da9aca01ebdf-apiservice-cert\") pod \"metallb-operator-controller-manager-7df594d648-xsbbg\" (UID: \"9dd77619-44b9-4c77-a3c9-da9aca01ebdf\") " pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.366218 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89b8j\" (UniqueName: \"kubernetes.io/projected/9dd77619-44b9-4c77-a3c9-da9aca01ebdf-kube-api-access-89b8j\") pod \"metallb-operator-controller-manager-7df594d648-xsbbg\" (UID: \"9dd77619-44b9-4c77-a3c9-da9aca01ebdf\") " pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.366286 
4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9dd77619-44b9-4c77-a3c9-da9aca01ebdf-webhook-cert\") pod \"metallb-operator-controller-manager-7df594d648-xsbbg\" (UID: \"9dd77619-44b9-4c77-a3c9-da9aca01ebdf\") " pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.373011 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9dd77619-44b9-4c77-a3c9-da9aca01ebdf-apiservice-cert\") pod \"metallb-operator-controller-manager-7df594d648-xsbbg\" (UID: \"9dd77619-44b9-4c77-a3c9-da9aca01ebdf\") " pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.373011 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9dd77619-44b9-4c77-a3c9-da9aca01ebdf-webhook-cert\") pod \"metallb-operator-controller-manager-7df594d648-xsbbg\" (UID: \"9dd77619-44b9-4c77-a3c9-da9aca01ebdf\") " pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.387808 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89b8j\" (UniqueName: \"kubernetes.io/projected/9dd77619-44b9-4c77-a3c9-da9aca01ebdf-kube-api-access-89b8j\") pod \"metallb-operator-controller-manager-7df594d648-xsbbg\" (UID: \"9dd77619-44b9-4c77-a3c9-da9aca01ebdf\") " pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.453013 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj"] Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.453764 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.455746 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.455746 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.456869 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4mqq9" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.467743 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b1f3560-9351-4283-b171-7df165a2bedc-apiservice-cert\") pod \"metallb-operator-webhook-server-6774c47879-2wtwj\" (UID: \"0b1f3560-9351-4283-b171-7df165a2bedc\") " pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.467827 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsz5b\" (UniqueName: \"kubernetes.io/projected/0b1f3560-9351-4283-b171-7df165a2bedc-kube-api-access-hsz5b\") pod \"metallb-operator-webhook-server-6774c47879-2wtwj\" (UID: \"0b1f3560-9351-4283-b171-7df165a2bedc\") " pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.467893 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b1f3560-9351-4283-b171-7df165a2bedc-webhook-cert\") pod \"metallb-operator-webhook-server-6774c47879-2wtwj\" (UID: \"0b1f3560-9351-4283-b171-7df165a2bedc\") " pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.471099 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj"] Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.536649 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.569537 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b1f3560-9351-4283-b171-7df165a2bedc-webhook-cert\") pod \"metallb-operator-webhook-server-6774c47879-2wtwj\" (UID: \"0b1f3560-9351-4283-b171-7df165a2bedc\") " pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.572542 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b1f3560-9351-4283-b171-7df165a2bedc-apiservice-cert\") pod \"metallb-operator-webhook-server-6774c47879-2wtwj\" (UID: \"0b1f3560-9351-4283-b171-7df165a2bedc\") " pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.572615 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsz5b\" (UniqueName: \"kubernetes.io/projected/0b1f3560-9351-4283-b171-7df165a2bedc-kube-api-access-hsz5b\") pod \"metallb-operator-webhook-server-6774c47879-2wtwj\" (UID: \"0b1f3560-9351-4283-b171-7df165a2bedc\") " pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.577256 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b1f3560-9351-4283-b171-7df165a2bedc-webhook-cert\") pod \"metallb-operator-webhook-server-6774c47879-2wtwj\" (UID: \"0b1f3560-9351-4283-b171-7df165a2bedc\") " pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.582482 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b1f3560-9351-4283-b171-7df165a2bedc-apiservice-cert\") pod \"metallb-operator-webhook-server-6774c47879-2wtwj\" (UID: \"0b1f3560-9351-4283-b171-7df165a2bedc\") " pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.598147 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsz5b\" (UniqueName: \"kubernetes.io/projected/0b1f3560-9351-4283-b171-7df165a2bedc-kube-api-access-hsz5b\") pod \"metallb-operator-webhook-server-6774c47879-2wtwj\" (UID: \"0b1f3560-9351-4283-b171-7df165a2bedc\") " pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.779266 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:26:50 crc kubenswrapper[4953]: I1211 10:26:50.993694 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg"] Dec 11 10:26:51 crc kubenswrapper[4953]: W1211 10:26:51.006036 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dd77619_44b9_4c77_a3c9_da9aca01ebdf.slice/crio-417d80db65e482c87df8bfdb98edc0b1f6e4041b1c2b29ecd07939abd107f2a6 WatchSource:0}: Error finding container 417d80db65e482c87df8bfdb98edc0b1f6e4041b1c2b29ecd07939abd107f2a6: Status 404 returned error can't find the container with id 417d80db65e482c87df8bfdb98edc0b1f6e4041b1c2b29ecd07939abd107f2a6 Dec 11 10:26:51 crc kubenswrapper[4953]: I1211 10:26:51.097842 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj"] Dec 11 10:26:51 crc kubenswrapper[4953]: W1211 10:26:51.104045 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b1f3560_9351_4283_b171_7df165a2bedc.slice/crio-1d669f0f4f1e4dc4fa83e4697fb9e0289287d47a6a096382a12867a91c2b182a WatchSource:0}: Error finding container 1d669f0f4f1e4dc4fa83e4697fb9e0289287d47a6a096382a12867a91c2b182a: Status 404 returned error can't find the container with id 1d669f0f4f1e4dc4fa83e4697fb9e0289287d47a6a096382a12867a91c2b182a Dec 11 10:26:51 crc kubenswrapper[4953]: I1211 10:26:51.637954 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" event={"ID":"9dd77619-44b9-4c77-a3c9-da9aca01ebdf","Type":"ContainerStarted","Data":"417d80db65e482c87df8bfdb98edc0b1f6e4041b1c2b29ecd07939abd107f2a6"} Dec 11 10:26:51 crc kubenswrapper[4953]: I1211 10:26:51.639230 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" event={"ID":"0b1f3560-9351-4283-b171-7df165a2bedc","Type":"ContainerStarted","Data":"1d669f0f4f1e4dc4fa83e4697fb9e0289287d47a6a096382a12867a91c2b182a"} Dec 11 10:26:55 crc kubenswrapper[4953]: I1211 10:26:55.697038 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" event={"ID":"9dd77619-44b9-4c77-a3c9-da9aca01ebdf","Type":"ContainerStarted","Data":"2cd09c2b2f31bac03522fd268a0a6f4c0c34d0d03e71f565d00d7e74eb3aa522"} Dec 11 10:26:55 crc kubenswrapper[4953]: I1211 10:26:55.698186 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:26:55 crc kubenswrapper[4953]: I1211 10:26:55.735389 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" podStartSLOduration=2.266520989 podStartE2EDuration="5.73536509s" podCreationTimestamp="2025-12-11 10:26:50 +0000 UTC" firstStartedPulling="2025-12-11 10:26:51.008685936 +0000 UTC m=+929.032544969" lastFinishedPulling="2025-12-11 10:26:54.477530037 +0000 UTC m=+932.501389070" observedRunningTime="2025-12-11 10:26:55.726961253 +0000 UTC m=+933.750820286" watchObservedRunningTime="2025-12-11 10:26:55.73536509 +0000 UTC m=+933.759224123" Dec 11 10:26:56 crc kubenswrapper[4953]: I1211 10:26:56.705804 4953 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" event={"ID":"0b1f3560-9351-4283-b171-7df165a2bedc","Type":"ContainerStarted","Data":"ea87bd43644e3db9c0d514997583872299932ca1ec83095764877c1d204eecaa"} Dec 11 10:26:56 crc kubenswrapper[4953]: I1211 10:26:56.730875 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" podStartSLOduration=1.364215024 podStartE2EDuration="6.730853222s" podCreationTimestamp="2025-12-11 10:26:50 +0000 UTC" firstStartedPulling="2025-12-11 10:26:51.1077627 +0000 UTC m=+929.131621733" lastFinishedPulling="2025-12-11 10:26:56.474400888 +0000 UTC m=+934.498259931" observedRunningTime="2025-12-11 10:26:56.72670842 +0000 UTC m=+934.750567483" watchObservedRunningTime="2025-12-11 10:26:56.730853222 +0000 UTC m=+934.754712255" Dec 11 10:26:57 crc kubenswrapper[4953]: I1211 10:26:57.719244 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:27:10 crc kubenswrapper[4953]: I1211 10:27:10.783975 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6774c47879-2wtwj" Dec 11 10:27:18 crc kubenswrapper[4953]: I1211 10:27:18.193946 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:27:18 crc kubenswrapper[4953]: I1211 10:27:18.194696 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:27:30 crc kubenswrapper[4953]: I1211 10:27:30.540421 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7df594d648-xsbbg" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.286150 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-wsh66"] Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.289122 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.297326 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.297675 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-v5c9d" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.297921 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.306474 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr"] Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.307409 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.309136 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.320293 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr"] Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.320645 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/97ad19c9-42d1-49f4-a634-baa459c11c80-frr-startup\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.320703 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/97ad19c9-42d1-49f4-a634-baa459c11c80-reloader\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.320736 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/97ad19c9-42d1-49f4-a634-baa459c11c80-frr-conf\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.320779 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/97ad19c9-42d1-49f4-a634-baa459c11c80-frr-sockets\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.320822 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/97ad19c9-42d1-49f4-a634-baa459c11c80-metrics\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.320897 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7lqx\" (UniqueName: \"kubernetes.io/projected/97ad19c9-42d1-49f4-a634-baa459c11c80-kube-api-access-p7lqx\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.320928 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/97ad19c9-42d1-49f4-a634-baa459c11c80-metrics-certs\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.417787 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-z79z5"] Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.418734 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-z79z5" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.422911 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.422953 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.422911 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.422928 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-v5bp6" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.423330 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7lqx\" (UniqueName: \"kubernetes.io/projected/97ad19c9-42d1-49f4-a634-baa459c11c80-kube-api-access-p7lqx\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.423376 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/97ad19c9-42d1-49f4-a634-baa459c11c80-metrics-certs\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.423412 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb56r\" (UniqueName: \"kubernetes.io/projected/e417a1bf-f380-4cd9-8f0b-a9b1766c578a-kube-api-access-zb56r\") pod \"frr-k8s-webhook-server-7784b6fcf-wzplr\" (UID: \"e417a1bf-f380-4cd9-8f0b-a9b1766c578a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.423442 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/97ad19c9-42d1-49f4-a634-baa459c11c80-frr-startup\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.423460 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/97ad19c9-42d1-49f4-a634-baa459c11c80-reloader\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.423479 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/97ad19c9-42d1-49f4-a634-baa459c11c80-frr-conf\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.423499 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/97ad19c9-42d1-49f4-a634-baa459c11c80-frr-sockets\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.423523 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/97ad19c9-42d1-49f4-a634-baa459c11c80-metrics\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.423549 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e417a1bf-f380-4cd9-8f0b-a9b1766c578a-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-wzplr\" (UID: \"e417a1bf-f380-4cd9-8f0b-a9b1766c578a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.424222 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/97ad19c9-42d1-49f4-a634-baa459c11c80-reloader\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.424291 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/97ad19c9-42d1-49f4-a634-baa459c11c80-frr-sockets\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.424443 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/97ad19c9-42d1-49f4-a634-baa459c11c80-frr-conf\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.424764 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/97ad19c9-42d1-49f4-a634-baa459c11c80-frr-startup\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.424780 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/97ad19c9-42d1-49f4-a634-baa459c11c80-metrics\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.448718 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/97ad19c9-42d1-49f4-a634-baa459c11c80-metrics-certs\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: E1211 10:27:31.561261 4953 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 11 10:27:31 crc kubenswrapper[4953]: E1211 10:27:31.561429 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e417a1bf-f380-4cd9-8f0b-a9b1766c578a-cert podName:e417a1bf-f380-4cd9-8f0b-a9b1766c578a nodeName:}" failed. No retries permitted until 2025-12-11 10:27:32.061335828 +0000 UTC m=+970.085195001 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e417a1bf-f380-4cd9-8f0b-a9b1766c578a-cert") pod "frr-k8s-webhook-server-7784b6fcf-wzplr" (UID: "e417a1bf-f380-4cd9-8f0b-a9b1766c578a") : secret "frr-k8s-webhook-server-cert" not found Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.568037 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e417a1bf-f380-4cd9-8f0b-a9b1766c578a-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-wzplr\" (UID: \"e417a1bf-f380-4cd9-8f0b-a9b1766c578a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.568137 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb56r\" (UniqueName: \"kubernetes.io/projected/e417a1bf-f380-4cd9-8f0b-a9b1766c578a-kube-api-access-zb56r\") pod \"frr-k8s-webhook-server-7784b6fcf-wzplr\" (UID: \"e417a1bf-f380-4cd9-8f0b-a9b1766c578a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.581648 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7lqx\" (UniqueName: \"kubernetes.io/projected/97ad19c9-42d1-49f4-a634-baa459c11c80-kube-api-access-p7lqx\") pod \"frr-k8s-wsh66\" (UID: \"97ad19c9-42d1-49f4-a634-baa459c11c80\") " pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.609962 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb56r\" (UniqueName: \"kubernetes.io/projected/e417a1bf-f380-4cd9-8f0b-a9b1766c578a-kube-api-access-zb56r\") pod \"frr-k8s-webhook-server-7784b6fcf-wzplr\" (UID: \"e417a1bf-f380-4cd9-8f0b-a9b1766c578a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.610299 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.630642 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-5t68s"] Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.631646 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.643610 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-5t68s"] Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.643850 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.674904 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-metallb-excludel2\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.674960 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-metrics-certs\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.675028 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22skp\" (UniqueName: \"kubernetes.io/projected/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-kube-api-access-22skp\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.675072 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15c4c986-25bb-43ac-93b3-ea7b2dd1e707-cert\") pod \"controller-5bddd4b946-5t68s\" (UID: \"15c4c986-25bb-43ac-93b3-ea7b2dd1e707\") " pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.675180 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srjht\" (UniqueName: \"kubernetes.io/projected/15c4c986-25bb-43ac-93b3-ea7b2dd1e707-kube-api-access-srjht\") pod \"controller-5bddd4b946-5t68s\" (UID: \"15c4c986-25bb-43ac-93b3-ea7b2dd1e707\") " pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.675228 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-memberlist\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.675253 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15c4c986-25bb-43ac-93b3-ea7b2dd1e707-metrics-certs\") pod \"controller-5bddd4b946-5t68s\" (UID: \"15c4c986-25bb-43ac-93b3-ea7b2dd1e707\") " pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.785742 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srjht\" (UniqueName: \"kubernetes.io/projected/15c4c986-25bb-43ac-93b3-ea7b2dd1e707-kube-api-access-srjht\") pod \"controller-5bddd4b946-5t68s\" (UID: \"15c4c986-25bb-43ac-93b3-ea7b2dd1e707\") " 
pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.786124 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-memberlist\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.786170 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15c4c986-25bb-43ac-93b3-ea7b2dd1e707-metrics-certs\") pod \"controller-5bddd4b946-5t68s\" (UID: \"15c4c986-25bb-43ac-93b3-ea7b2dd1e707\") " pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.786235 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-metallb-excludel2\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.786260 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-metrics-certs\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.786296 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22skp\" (UniqueName: \"kubernetes.io/projected/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-kube-api-access-22skp\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.786318 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15c4c986-25bb-43ac-93b3-ea7b2dd1e707-cert\") pod \"controller-5bddd4b946-5t68s\" (UID: \"15c4c986-25bb-43ac-93b3-ea7b2dd1e707\") " pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:31 crc kubenswrapper[4953]: E1211 10:27:31.786330 4953 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 11 10:27:31 crc kubenswrapper[4953]: E1211 10:27:31.786415 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-memberlist podName:94d8c17f-ab97-41a7-a7e5-bb8fa013b562 nodeName:}" failed. No retries permitted until 2025-12-11 10:27:32.286393302 +0000 UTC m=+970.310252435 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-memberlist") pod "speaker-z79z5" (UID: "94d8c17f-ab97-41a7-a7e5-bb8fa013b562") : secret "metallb-memberlist" not found Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.787184 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-metallb-excludel2\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.789396 4953 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.792561 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-metrics-certs\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.793219 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15c4c986-25bb-43ac-93b3-ea7b2dd1e707-metrics-certs\") pod \"controller-5bddd4b946-5t68s\" (UID: \"15c4c986-25bb-43ac-93b3-ea7b2dd1e707\") " pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.799757 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15c4c986-25bb-43ac-93b3-ea7b2dd1e707-cert\") pod \"controller-5bddd4b946-5t68s\" (UID: \"15c4c986-25bb-43ac-93b3-ea7b2dd1e707\") " pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.803228 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srjht\" (UniqueName: \"kubernetes.io/projected/15c4c986-25bb-43ac-93b3-ea7b2dd1e707-kube-api-access-srjht\") pod \"controller-5bddd4b946-5t68s\" (UID: \"15c4c986-25bb-43ac-93b3-ea7b2dd1e707\") " pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:31 crc kubenswrapper[4953]: I1211 10:27:31.806072 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22skp\" (UniqueName: \"kubernetes.io/projected/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-kube-api-access-22skp\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:32 crc kubenswrapper[4953]: I1211 10:27:32.007674 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:32 crc kubenswrapper[4953]: I1211 10:27:32.079525 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wsh66" event={"ID":"97ad19c9-42d1-49f4-a634-baa459c11c80","Type":"ContainerStarted","Data":"4b7b5ac7319743d2546d9b43401e97fb25a74561eb27de4755a16f74d37d0707"} Dec 11 10:27:32 crc kubenswrapper[4953]: I1211 10:27:32.090361 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e417a1bf-f380-4cd9-8f0b-a9b1766c578a-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-wzplr\" (UID: \"e417a1bf-f380-4cd9-8f0b-a9b1766c578a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" Dec 11 10:27:32 crc kubenswrapper[4953]: I1211 10:27:32.095290 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e417a1bf-f380-4cd9-8f0b-a9b1766c578a-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-wzplr\" (UID: \"e417a1bf-f380-4cd9-8f0b-a9b1766c578a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" Dec 11 10:27:32 crc kubenswrapper[4953]: I1211 10:27:32.231092 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" Dec 11 10:27:32 crc kubenswrapper[4953]: I1211 10:27:32.292988 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-memberlist\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:32 crc kubenswrapper[4953]: E1211 10:27:32.293223 4953 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 11 10:27:32 crc kubenswrapper[4953]: E1211 10:27:32.293287 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-memberlist podName:94d8c17f-ab97-41a7-a7e5-bb8fa013b562 nodeName:}" failed. No retries permitted until 2025-12-11 10:27:33.293267629 +0000 UTC m=+971.317126662 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-memberlist") pod "speaker-z79z5" (UID: "94d8c17f-ab97-41a7-a7e5-bb8fa013b562") : secret "metallb-memberlist" not found Dec 11 10:27:32 crc kubenswrapper[4953]: I1211 10:27:32.502051 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-5t68s"] Dec 11 10:27:32 crc kubenswrapper[4953]: W1211 10:27:32.510077 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15c4c986_25bb_43ac_93b3_ea7b2dd1e707.slice/crio-4960f8fe91ae2f5aa22b7cd6af577a706bafcdc408cebcd7c8f9c39b39d869e7 WatchSource:0}: Error finding container 4960f8fe91ae2f5aa22b7cd6af577a706bafcdc408cebcd7c8f9c39b39d869e7: Status 404 returned error can't find the container with id 4960f8fe91ae2f5aa22b7cd6af577a706bafcdc408cebcd7c8f9c39b39d869e7 Dec 11 10:27:32 crc kubenswrapper[4953]: I1211 10:27:32.759052 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr"] Dec 11 10:27:33 crc kubenswrapper[4953]: I1211 10:27:33.087511 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" event={"ID":"e417a1bf-f380-4cd9-8f0b-a9b1766c578a","Type":"ContainerStarted","Data":"594866277c5e29241fcc273ba4ffe1c965db0cb05b6e92b867f19c9f8a8cca87"} Dec 11 10:27:33 crc kubenswrapper[4953]: I1211 10:27:33.090032 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-5t68s" event={"ID":"15c4c986-25bb-43ac-93b3-ea7b2dd1e707","Type":"ContainerStarted","Data":"7f3f8ce3b33da836f53b06c307d04ee94bd84746d7a1a49969a28ca3f8fa9758"} Dec 11 10:27:33 crc kubenswrapper[4953]: I1211 10:27:33.090218 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-5t68s" event={"ID":"15c4c986-25bb-43ac-93b3-ea7b2dd1e707","Type":"ContainerStarted","Data":"7f7e247e333da4ea0dac55af520a17482ee47fe56df5d775d51c3c8db824913b"} Dec 11 10:27:33 crc kubenswrapper[4953]: I1211 10:27:33.090383 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-5t68s" event={"ID":"15c4c986-25bb-43ac-93b3-ea7b2dd1e707","Type":"ContainerStarted","Data":"4960f8fe91ae2f5aa22b7cd6af577a706bafcdc408cebcd7c8f9c39b39d869e7"} Dec 11 10:27:33 crc kubenswrapper[4953]: I1211 10:27:33.090519 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:33 crc kubenswrapper[4953]: I1211 10:27:33.116765 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-5t68s" podStartSLOduration=2.116739564 podStartE2EDuration="2.116739564s" podCreationTimestamp="2025-12-11 10:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:27:33.113547953 +0000 UTC m=+971.137406986" watchObservedRunningTime="2025-12-11 10:27:33.116739564 +0000 UTC m=+971.140598607" Dec 11 10:27:33 crc kubenswrapper[4953]: I1211 10:27:33.370431 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-memberlist\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:33 crc 
kubenswrapper[4953]: I1211 10:27:33.377374 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/94d8c17f-ab97-41a7-a7e5-bb8fa013b562-memberlist\") pod \"speaker-z79z5\" (UID: \"94d8c17f-ab97-41a7-a7e5-bb8fa013b562\") " pod="metallb-system/speaker-z79z5" Dec 11 10:27:33 crc kubenswrapper[4953]: I1211 10:27:33.659975 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-z79z5" Dec 11 10:27:33 crc kubenswrapper[4953]: W1211 10:27:33.716827 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94d8c17f_ab97_41a7_a7e5_bb8fa013b562.slice/crio-31bbf1f285ec7e81cf8bdcaeb20154dc683686d800182a7392647a160340e913 WatchSource:0}: Error finding container 31bbf1f285ec7e81cf8bdcaeb20154dc683686d800182a7392647a160340e913: Status 404 returned error can't find the container with id 31bbf1f285ec7e81cf8bdcaeb20154dc683686d800182a7392647a160340e913 Dec 11 10:27:34 crc kubenswrapper[4953]: I1211 10:27:34.125322 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-z79z5" event={"ID":"94d8c17f-ab97-41a7-a7e5-bb8fa013b562","Type":"ContainerStarted","Data":"31bbf1f285ec7e81cf8bdcaeb20154dc683686d800182a7392647a160340e913"} Dec 11 10:27:35 crc kubenswrapper[4953]: I1211 10:27:35.133606 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-z79z5" event={"ID":"94d8c17f-ab97-41a7-a7e5-bb8fa013b562","Type":"ContainerStarted","Data":"1a652ca9429a0336d324e3a4cc1508f7a924531591cd2a81b5baf794692eb5b3"} Dec 11 10:27:35 crc kubenswrapper[4953]: I1211 10:27:35.133974 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-z79z5" event={"ID":"94d8c17f-ab97-41a7-a7e5-bb8fa013b562","Type":"ContainerStarted","Data":"6ee13a5112d6ee6dd359a8e64cb2721b67d315f6610f155c01818b22c7b12459"} Dec 11 10:27:35 crc kubenswrapper[4953]: I1211 10:27:35.135132 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-z79z5" Dec 11 10:27:35 crc kubenswrapper[4953]: I1211 10:27:35.158982 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-z79z5" podStartSLOduration=4.158955499 podStartE2EDuration="4.158955499s" podCreationTimestamp="2025-12-11 10:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:27:35.158498045 +0000 UTC m=+973.182357088" watchObservedRunningTime="2025-12-11 10:27:35.158955499 +0000 UTC m=+973.182814532" Dec 11 10:27:40 crc kubenswrapper[4953]: I1211 10:27:40.182316 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" event={"ID":"e417a1bf-f380-4cd9-8f0b-a9b1766c578a","Type":"ContainerStarted","Data":"befc362f4db04dabebed591b7a905324d033b075d127ea9006ce754f03c632b8"} Dec 11 10:27:40 crc kubenswrapper[4953]: I1211 10:27:40.183225 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" Dec 11 10:27:40 crc kubenswrapper[4953]: I1211 10:27:40.184928 4953 generic.go:334] "Generic (PLEG): container finished" podID="97ad19c9-42d1-49f4-a634-baa459c11c80" containerID="506b4af8e96dcf334bc5032cb7858fdd34ae1bfb0462f72e2e0a7eb22aed047d" exitCode=0 Dec 11 10:27:40 crc kubenswrapper[4953]: I1211 10:27:40.184994 4953 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-wsh66" event={"ID":"97ad19c9-42d1-49f4-a634-baa459c11c80","Type":"ContainerDied","Data":"506b4af8e96dcf334bc5032cb7858fdd34ae1bfb0462f72e2e0a7eb22aed047d"} Dec 11 10:27:40 crc kubenswrapper[4953]: I1211 10:27:40.208350 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" podStartSLOduration=2.50338914 podStartE2EDuration="9.208325845s" podCreationTimestamp="2025-12-11 10:27:31 +0000 UTC" firstStartedPulling="2025-12-11 10:27:32.782128623 +0000 UTC m=+970.805987656" lastFinishedPulling="2025-12-11 10:27:39.487065328 +0000 UTC m=+977.510924361" observedRunningTime="2025-12-11 10:27:40.20674455 +0000 UTC m=+978.230603603" watchObservedRunningTime="2025-12-11 10:27:40.208325845 +0000 UTC m=+978.232184908" Dec 11 10:27:41 crc kubenswrapper[4953]: I1211 10:27:41.191359 4953 generic.go:334] "Generic (PLEG): container finished" podID="97ad19c9-42d1-49f4-a634-baa459c11c80" containerID="81c3cd5f965840fbdddeef42b3a5ffad4359b2f66d976b8fd4ef30f99832c8f5" exitCode=0 Dec 11 10:27:41 crc kubenswrapper[4953]: I1211 10:27:41.191911 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wsh66" event={"ID":"97ad19c9-42d1-49f4-a634-baa459c11c80","Type":"ContainerDied","Data":"81c3cd5f965840fbdddeef42b3a5ffad4359b2f66d976b8fd4ef30f99832c8f5"} Dec 11 10:27:42 crc kubenswrapper[4953]: I1211 10:27:42.013012 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-5t68s" Dec 11 10:27:42 crc kubenswrapper[4953]: I1211 10:27:42.198443 4953 generic.go:334] "Generic (PLEG): container finished" podID="97ad19c9-42d1-49f4-a634-baa459c11c80" containerID="1e06c458bd82d8289425ed9e04e4c1ea678871c7fb6971d4ac600615a5326a51" exitCode=0 Dec 11 10:27:42 crc kubenswrapper[4953]: I1211 10:27:42.198496 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wsh66" event={"ID":"97ad19c9-42d1-49f4-a634-baa459c11c80","Type":"ContainerDied","Data":"1e06c458bd82d8289425ed9e04e4c1ea678871c7fb6971d4ac600615a5326a51"} Dec 11 10:27:43 crc kubenswrapper[4953]: I1211 10:27:43.208169 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wsh66" event={"ID":"97ad19c9-42d1-49f4-a634-baa459c11c80","Type":"ContainerStarted","Data":"a65fead4bfbd196fe0c74efb6bf91972dcb9683a4de8ea22c78830e2debd6965"} Dec 11 10:27:43 crc kubenswrapper[4953]: I1211 10:27:43.208465 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wsh66" event={"ID":"97ad19c9-42d1-49f4-a634-baa459c11c80","Type":"ContainerStarted","Data":"bb41452dd91c43c07bc164b134918d0b00e0a25819e7b6b39606529f04784b94"} Dec 11 10:27:43 crc kubenswrapper[4953]: I1211 10:27:43.208477 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wsh66" event={"ID":"97ad19c9-42d1-49f4-a634-baa459c11c80","Type":"ContainerStarted","Data":"7b2cf900b811440438e655ea81eab635617f904d52129d7b1408144d0e7e1fca"} Dec 11 10:27:43 crc kubenswrapper[4953]: I1211 10:27:43.208485 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wsh66" event={"ID":"97ad19c9-42d1-49f4-a634-baa459c11c80","Type":"ContainerStarted","Data":"f2d4cd16ae8dbeace82a6eeedff64bd665a4c98df7be839a5f9b5534824fd338"} Dec 11 10:27:43 crc kubenswrapper[4953]: I1211 10:27:43.208495 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wsh66" 
event={"ID":"97ad19c9-42d1-49f4-a634-baa459c11c80","Type":"ContainerStarted","Data":"d8e71a53cb2214f632dba5e90b4b485ecb5a60fbc8700b384f2825f6808ce6bd"} Dec 11 10:27:44 crc kubenswrapper[4953]: I1211 10:27:44.220441 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wsh66" event={"ID":"97ad19c9-42d1-49f4-a634-baa459c11c80","Type":"ContainerStarted","Data":"c7295ee9263c79e6eec9b87cb1bc006e04ce0539e7d22b8ebb34453f97ced1a5"} Dec 11 10:27:44 crc kubenswrapper[4953]: I1211 10:27:44.220664 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:44 crc kubenswrapper[4953]: I1211 10:27:44.252990 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-wsh66" podStartSLOduration=5.594645671 podStartE2EDuration="13.252970516s" podCreationTimestamp="2025-12-11 10:27:31 +0000 UTC" firstStartedPulling="2025-12-11 10:27:31.804229851 +0000 UTC m=+969.828088884" lastFinishedPulling="2025-12-11 10:27:39.462554706 +0000 UTC m=+977.486413729" observedRunningTime="2025-12-11 10:27:44.249466353 +0000 UTC m=+982.273325386" watchObservedRunningTime="2025-12-11 10:27:44.252970516 +0000 UTC m=+982.276829559" Dec 11 10:27:46 crc kubenswrapper[4953]: I1211 10:27:46.611727 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:46 crc kubenswrapper[4953]: I1211 10:27:46.655396 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-wsh66" Dec 11 10:27:48 crc kubenswrapper[4953]: I1211 10:27:48.194622 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:27:48 crc kubenswrapper[4953]: I1211 10:27:48.195680 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:27:52 crc kubenswrapper[4953]: I1211 10:27:52.236911 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wzplr" Dec 11 10:27:53 crc kubenswrapper[4953]: I1211 10:27:53.667910 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-z79z5" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.063897 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg"] Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.066044 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.069350 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.077946 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg"] Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.149106 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0689468-4037-474c-b76c-3580965a01fc-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg\" (UID: \"c0689468-4037-474c-b76c-3580965a01fc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.149172 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0689468-4037-474c-b76c-3580965a01fc-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg\" (UID: \"c0689468-4037-474c-b76c-3580965a01fc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.149199 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksgbx\" (UniqueName: \"kubernetes.io/projected/c0689468-4037-474c-b76c-3580965a01fc-kube-api-access-ksgbx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg\" (UID: \"c0689468-4037-474c-b76c-3580965a01fc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.251084 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0689468-4037-474c-b76c-3580965a01fc-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg\" (UID: \"c0689468-4037-474c-b76c-3580965a01fc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.251149 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0689468-4037-474c-b76c-3580965a01fc-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg\" (UID: \"c0689468-4037-474c-b76c-3580965a01fc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.251178 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksgbx\" (UniqueName: \"kubernetes.io/projected/c0689468-4037-474c-b76c-3580965a01fc-kube-api-access-ksgbx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg\" (UID: \"c0689468-4037-474c-b76c-3580965a01fc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.252046 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c0689468-4037-474c-b76c-3580965a01fc-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg\" (UID: \"c0689468-4037-474c-b76c-3580965a01fc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.252349 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0689468-4037-474c-b76c-3580965a01fc-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg\" (UID: \"c0689468-4037-474c-b76c-3580965a01fc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.272443 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksgbx\" (UniqueName: \"kubernetes.io/projected/c0689468-4037-474c-b76c-3580965a01fc-kube-api-access-ksgbx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg\" (UID: \"c0689468-4037-474c-b76c-3580965a01fc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.387468 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:27:55 crc kubenswrapper[4953]: I1211 10:27:55.913542 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg"] Dec 11 10:27:55 crc kubenswrapper[4953]: W1211 10:27:55.919436 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0689468_4037_474c_b76c_3580965a01fc.slice/crio-8b76bbfc7cda59cc4b89771d5e4d2a31442da6a219daa7b52dc3868bacde5d54 WatchSource:0}: Error finding container 8b76bbfc7cda59cc4b89771d5e4d2a31442da6a219daa7b52dc3868bacde5d54: Status 404 returned error can't find the container with id 8b76bbfc7cda59cc4b89771d5e4d2a31442da6a219daa7b52dc3868bacde5d54 Dec 11 10:27:56 crc kubenswrapper[4953]: I1211 10:27:56.315518 4953 generic.go:334] "Generic (PLEG): container finished" podID="c0689468-4037-474c-b76c-3580965a01fc" containerID="3254fe53e7d84a2f389b99dc8d5208fc6d9f6a86f6aa1ab70ad2fa274cdb66a9" exitCode=0 Dec 11 10:27:56 crc kubenswrapper[4953]: I1211 10:27:56.315587 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" event={"ID":"c0689468-4037-474c-b76c-3580965a01fc","Type":"ContainerDied","Data":"3254fe53e7d84a2f389b99dc8d5208fc6d9f6a86f6aa1ab70ad2fa274cdb66a9"} Dec 11 10:27:56 crc kubenswrapper[4953]: I1211 10:27:56.315628 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" event={"ID":"c0689468-4037-474c-b76c-3580965a01fc","Type":"ContainerStarted","Data":"8b76bbfc7cda59cc4b89771d5e4d2a31442da6a219daa7b52dc3868bacde5d54"} Dec 11 10:28:00 crc kubenswrapper[4953]: I1211 10:28:00.457830 4953 generic.go:334] "Generic (PLEG): container finished" podID="c0689468-4037-474c-b76c-3580965a01fc" containerID="5c317d43e91fd432dfb79a05aa68f9c953907c1f3ae9daff5861373b481ef3ba" exitCode=0 Dec 11 10:28:00 crc kubenswrapper[4953]: I1211 10:28:00.457983 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" event={"ID":"c0689468-4037-474c-b76c-3580965a01fc","Type":"ContainerDied","Data":"5c317d43e91fd432dfb79a05aa68f9c953907c1f3ae9daff5861373b481ef3ba"} Dec 11 10:28:01 crc kubenswrapper[4953]: I1211 10:28:01.472731 4953 generic.go:334] "Generic (PLEG): container finished" podID="c0689468-4037-474c-b76c-3580965a01fc" containerID="c8a523dc6250a96c0dc4439409d16a29e9f4658850d6beaf83431e7f35dbb7d9" exitCode=0 Dec 11 10:28:01 crc kubenswrapper[4953]: I1211 10:28:01.473093 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" event={"ID":"c0689468-4037-474c-b76c-3580965a01fc","Type":"ContainerDied","Data":"c8a523dc6250a96c0dc4439409d16a29e9f4658850d6beaf83431e7f35dbb7d9"} Dec 11 10:28:01 crc kubenswrapper[4953]: I1211 10:28:01.613566 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-wsh66" Dec 11 10:28:02 crc kubenswrapper[4953]: I1211 10:28:02.851613 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:28:02 crc kubenswrapper[4953]: I1211 10:28:02.904567 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0689468-4037-474c-b76c-3580965a01fc-util\") pod \"c0689468-4037-474c-b76c-3580965a01fc\" (UID: \"c0689468-4037-474c-b76c-3580965a01fc\") " Dec 11 10:28:02 crc kubenswrapper[4953]: I1211 10:28:02.904741 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0689468-4037-474c-b76c-3580965a01fc-bundle\") pod \"c0689468-4037-474c-b76c-3580965a01fc\" (UID: \"c0689468-4037-474c-b76c-3580965a01fc\") " Dec 11 10:28:02 crc kubenswrapper[4953]: I1211 10:28:02.904790 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksgbx\" (UniqueName: \"kubernetes.io/projected/c0689468-4037-474c-b76c-3580965a01fc-kube-api-access-ksgbx\") pod \"c0689468-4037-474c-b76c-3580965a01fc\" (UID: \"c0689468-4037-474c-b76c-3580965a01fc\") " Dec 11 10:28:02 crc kubenswrapper[4953]: I1211 10:28:02.907348 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0689468-4037-474c-b76c-3580965a01fc-bundle" (OuterVolumeSpecName: "bundle") pod "c0689468-4037-474c-b76c-3580965a01fc" (UID: "c0689468-4037-474c-b76c-3580965a01fc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:28:02 crc kubenswrapper[4953]: I1211 10:28:02.910050 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0689468-4037-474c-b76c-3580965a01fc-kube-api-access-ksgbx" (OuterVolumeSpecName: "kube-api-access-ksgbx") pod "c0689468-4037-474c-b76c-3580965a01fc" (UID: "c0689468-4037-474c-b76c-3580965a01fc"). InnerVolumeSpecName "kube-api-access-ksgbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:28:02 crc kubenswrapper[4953]: I1211 10:28:02.921160 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0689468-4037-474c-b76c-3580965a01fc-util" (OuterVolumeSpecName: "util") pod "c0689468-4037-474c-b76c-3580965a01fc" (UID: "c0689468-4037-474c-b76c-3580965a01fc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:28:03 crc kubenswrapper[4953]: I1211 10:28:03.007338 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksgbx\" (UniqueName: \"kubernetes.io/projected/c0689468-4037-474c-b76c-3580965a01fc-kube-api-access-ksgbx\") on node \"crc\" DevicePath \"\"" Dec 11 10:28:03 crc kubenswrapper[4953]: I1211 10:28:03.007407 4953 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0689468-4037-474c-b76c-3580965a01fc-util\") on node \"crc\" DevicePath \"\"" Dec 11 10:28:03 crc kubenswrapper[4953]: I1211 10:28:03.007437 4953 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0689468-4037-474c-b76c-3580965a01fc-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:28:03 crc kubenswrapper[4953]: I1211 10:28:03.489547 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" event={"ID":"c0689468-4037-474c-b76c-3580965a01fc","Type":"ContainerDied","Data":"8b76bbfc7cda59cc4b89771d5e4d2a31442da6a219daa7b52dc3868bacde5d54"} Dec 11 10:28:03 crc kubenswrapper[4953]: I1211 10:28:03.489939 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b76bbfc7cda59cc4b89771d5e4d2a31442da6a219daa7b52dc3868bacde5d54" Dec 11 10:28:03 crc kubenswrapper[4953]: I1211 10:28:03.489666 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg" Dec 11 10:28:07 crc kubenswrapper[4953]: I1211 10:28:07.762601 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx"] Dec 11 10:28:07 crc kubenswrapper[4953]: E1211 10:28:07.763267 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0689468-4037-474c-b76c-3580965a01fc" containerName="util" Dec 11 10:28:07 crc kubenswrapper[4953]: I1211 10:28:07.763290 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0689468-4037-474c-b76c-3580965a01fc" containerName="util" Dec 11 10:28:07 crc kubenswrapper[4953]: E1211 10:28:07.763310 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0689468-4037-474c-b76c-3580965a01fc" containerName="extract" Dec 11 10:28:07 crc kubenswrapper[4953]: I1211 10:28:07.763319 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0689468-4037-474c-b76c-3580965a01fc" containerName="extract" Dec 11 10:28:07 crc kubenswrapper[4953]: E1211 10:28:07.763328 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0689468-4037-474c-b76c-3580965a01fc" containerName="pull" Dec 11 10:28:07 crc kubenswrapper[4953]: I1211 10:28:07.763338 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0689468-4037-474c-b76c-3580965a01fc" containerName="pull" Dec 11 10:28:07 crc kubenswrapper[4953]: I1211 10:28:07.763518 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0689468-4037-474c-b76c-3580965a01fc" containerName="extract" Dec 11 10:28:07 crc kubenswrapper[4953]: I1211 10:28:07.764129 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx" Dec 11 10:28:07 crc kubenswrapper[4953]: I1211 10:28:07.768937 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 11 10:28:07 crc kubenswrapper[4953]: I1211 10:28:07.768937 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 11 10:28:07 crc kubenswrapper[4953]: I1211 10:28:07.768999 4953 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-9jtqw" Dec 11 10:28:07 crc kubenswrapper[4953]: I1211 10:28:07.791992 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx"] Dec 11 10:28:07 crc kubenswrapper[4953]: I1211 10:28:07.922002 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc473330-ce44-4445-a7c1-92a86759c5d9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-qn9zx\" (UID: \"bc473330-ce44-4445-a7c1-92a86759c5d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx" Dec 11 10:28:07 crc kubenswrapper[4953]: I1211 10:28:07.922143 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqw2\" (UniqueName: \"kubernetes.io/projected/bc473330-ce44-4445-a7c1-92a86759c5d9-kube-api-access-jhqw2\") pod \"cert-manager-operator-controller-manager-64cf6dff88-qn9zx\" (UID: \"bc473330-ce44-4445-a7c1-92a86759c5d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx" Dec 11 10:28:08 crc kubenswrapper[4953]: I1211 10:28:08.023037 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhqw2\" (UniqueName: \"kubernetes.io/projected/bc473330-ce44-4445-a7c1-92a86759c5d9-kube-api-access-jhqw2\") pod \"cert-manager-operator-controller-manager-64cf6dff88-qn9zx\" (UID: \"bc473330-ce44-4445-a7c1-92a86759c5d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx" Dec 11 10:28:08 crc kubenswrapper[4953]: I1211 10:28:08.023116 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc473330-ce44-4445-a7c1-92a86759c5d9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-qn9zx\" (UID: \"bc473330-ce44-4445-a7c1-92a86759c5d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx" Dec 11 10:28:08 crc kubenswrapper[4953]: I1211 10:28:08.023646 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc473330-ce44-4445-a7c1-92a86759c5d9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-qn9zx\" (UID: \"bc473330-ce44-4445-a7c1-92a86759c5d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx" Dec 11 10:28:08 crc kubenswrapper[4953]: I1211 10:28:08.040424 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhqw2\" (UniqueName: \"kubernetes.io/projected/bc473330-ce44-4445-a7c1-92a86759c5d9-kube-api-access-jhqw2\") pod \"cert-manager-operator-controller-manager-64cf6dff88-qn9zx\" (UID: \"bc473330-ce44-4445-a7c1-92a86759c5d9\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx" Dec 11 10:28:08 crc kubenswrapper[4953]: I1211 10:28:08.085107 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx" Dec 11 10:28:08 crc kubenswrapper[4953]: I1211 10:28:08.635385 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx"] Dec 11 10:28:09 crc kubenswrapper[4953]: I1211 10:28:09.532384 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx" event={"ID":"bc473330-ce44-4445-a7c1-92a86759c5d9","Type":"ContainerStarted","Data":"a50304bc39742e9294982188585e543ce6797e429fce158eab00b30a514e70bc"} Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.350787 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nznsk"] Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.352623 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.358113 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nznsk"] Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.549393 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ef08360-d11b-4805-83f4-16c914be3cb8-catalog-content\") pod \"redhat-marketplace-nznsk\" (UID: \"9ef08360-d11b-4805-83f4-16c914be3cb8\") " pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.549876 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75rxf\" (UniqueName: \"kubernetes.io/projected/9ef08360-d11b-4805-83f4-16c914be3cb8-kube-api-access-75rxf\") pod \"redhat-marketplace-nznsk\" (UID: \"9ef08360-d11b-4805-83f4-16c914be3cb8\") " pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.550123 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ef08360-d11b-4805-83f4-16c914be3cb8-utilities\") pod \"redhat-marketplace-nznsk\" (UID: \"9ef08360-d11b-4805-83f4-16c914be3cb8\") " pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.651320 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ef08360-d11b-4805-83f4-16c914be3cb8-utilities\") pod \"redhat-marketplace-nznsk\" (UID: \"9ef08360-d11b-4805-83f4-16c914be3cb8\") " pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.651455 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ef08360-d11b-4805-83f4-16c914be3cb8-catalog-content\") pod \"redhat-marketplace-nznsk\" (UID: \"9ef08360-d11b-4805-83f4-16c914be3cb8\") " pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.651921 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ef08360-d11b-4805-83f4-16c914be3cb8-utilities\") pod \"redhat-marketplace-nznsk\" (UID: \"9ef08360-d11b-4805-83f4-16c914be3cb8\") " pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.652099 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ef08360-d11b-4805-83f4-16c914be3cb8-catalog-content\") pod \"redhat-marketplace-nznsk\" (UID: \"9ef08360-d11b-4805-83f4-16c914be3cb8\") " pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.652190 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75rxf\" (UniqueName: \"kubernetes.io/projected/9ef08360-d11b-4805-83f4-16c914be3cb8-kube-api-access-75rxf\") pod \"redhat-marketplace-nznsk\" (UID: \"9ef08360-d11b-4805-83f4-16c914be3cb8\") " pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.682184 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75rxf\" (UniqueName: \"kubernetes.io/projected/9ef08360-d11b-4805-83f4-16c914be3cb8-kube-api-access-75rxf\") pod \"redhat-marketplace-nznsk\" (UID: \"9ef08360-d11b-4805-83f4-16c914be3cb8\") " pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:14 crc kubenswrapper[4953]: I1211 10:28:14.970727 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:17 crc kubenswrapper[4953]: I1211 10:28:17.351751 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nznsk"] Dec 11 10:28:17 crc kubenswrapper[4953]: W1211 10:28:17.352754 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ef08360_d11b_4805_83f4_16c914be3cb8.slice/crio-e60d7a378e950b9838cea91b71ce8d0fae093842a6534d3999d8188f4a7317ef WatchSource:0}: Error finding container e60d7a378e950b9838cea91b71ce8d0fae093842a6534d3999d8188f4a7317ef: Status 404 returned error can't find the container with id e60d7a378e950b9838cea91b71ce8d0fae093842a6534d3999d8188f4a7317ef Dec 11 10:28:17 crc kubenswrapper[4953]: I1211 10:28:17.624642 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx" event={"ID":"bc473330-ce44-4445-a7c1-92a86759c5d9","Type":"ContainerStarted","Data":"2e674ededdc99395044ee014663cd75a6c35cbc467fb66a94b71c64e5d6d0eca"} Dec 11 10:28:17 crc kubenswrapper[4953]: I1211 10:28:17.626297 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nznsk" event={"ID":"9ef08360-d11b-4805-83f4-16c914be3cb8","Type":"ContainerStarted","Data":"333b24b925ec443677d2cb37b701556d0079ad70b4db94c19fc1f95c2d15f0fd"} Dec 11 10:28:17 crc kubenswrapper[4953]: I1211 10:28:17.626330 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nznsk" event={"ID":"9ef08360-d11b-4805-83f4-16c914be3cb8","Type":"ContainerStarted","Data":"e60d7a378e950b9838cea91b71ce8d0fae093842a6534d3999d8188f4a7317ef"} Dec 11 10:28:17 crc kubenswrapper[4953]: I1211 10:28:17.644558 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qn9zx" podStartSLOduration=2.089630044 podStartE2EDuration="10.644535126s" podCreationTimestamp="2025-12-11 10:28:07 +0000 UTC" firstStartedPulling="2025-12-11 10:28:08.65574791 +0000 UTC m=+1006.679606943" lastFinishedPulling="2025-12-11 10:28:17.210653002 +0000 UTC m=+1015.234512025" observedRunningTime="2025-12-11 10:28:17.641359394 +0000 UTC m=+1015.665218427" watchObservedRunningTime="2025-12-11 10:28:17.644535126 +0000 UTC m=+1015.668394159" Dec 11 10:28:18 crc kubenswrapper[4953]: I1211 10:28:18.193731 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:28:18 crc kubenswrapper[4953]: I1211 10:28:18.194079 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:28:18 crc kubenswrapper[4953]: I1211 10:28:18.194189 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:28:18 crc kubenswrapper[4953]: I1211 10:28:18.194940 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4128485b59765a5f0e1c236093ee311843a19fb26e6f522ba47964eefbd53b75"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:28:18 crc kubenswrapper[4953]: I1211 10:28:18.195109 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://4128485b59765a5f0e1c236093ee311843a19fb26e6f522ba47964eefbd53b75" gracePeriod=600 Dec 11 10:28:18 crc kubenswrapper[4953]: I1211 10:28:18.645464 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="4128485b59765a5f0e1c236093ee311843a19fb26e6f522ba47964eefbd53b75" exitCode=0 Dec 11 10:28:18 crc kubenswrapper[4953]: I1211 10:28:18.645522 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"4128485b59765a5f0e1c236093ee311843a19fb26e6f522ba47964eefbd53b75"} Dec 11 10:28:18 crc kubenswrapper[4953]: I1211 10:28:18.645828 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"d7aacf4c14bd2bc98ec833613461a09282ac2ac960a4b2c012b1862a1a65908a"} Dec 11 10:28:18 crc kubenswrapper[4953]: I1211 10:28:18.645872 4953 scope.go:117] "RemoveContainer" containerID="3ca59c50b35b5c8d77fc457ff5e5a06ef5ae754b46ae582746445b4e7704377c" Dec 11 10:28:18 crc kubenswrapper[4953]: I1211 10:28:18.649839 4953 generic.go:334] "Generic (PLEG): container finished" 
podID="9ef08360-d11b-4805-83f4-16c914be3cb8" containerID="333b24b925ec443677d2cb37b701556d0079ad70b4db94c19fc1f95c2d15f0fd" exitCode=0 Dec 11 10:28:18 crc kubenswrapper[4953]: I1211 10:28:18.651639 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nznsk" event={"ID":"9ef08360-d11b-4805-83f4-16c914be3cb8","Type":"ContainerDied","Data":"333b24b925ec443677d2cb37b701556d0079ad70b4db94c19fc1f95c2d15f0fd"} Dec 11 10:28:20 crc kubenswrapper[4953]: I1211 10:28:20.666732 4953 generic.go:334] "Generic (PLEG): container finished" podID="9ef08360-d11b-4805-83f4-16c914be3cb8" containerID="7f0218b6a292d812c83144d1f4d48e6b6ea3d1c312b2f2802693449bc4e599d8" exitCode=0 Dec 11 10:28:20 crc kubenswrapper[4953]: I1211 10:28:20.666789 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nznsk" event={"ID":"9ef08360-d11b-4805-83f4-16c914be3cb8","Type":"ContainerDied","Data":"7f0218b6a292d812c83144d1f4d48e6b6ea3d1c312b2f2802693449bc4e599d8"} Dec 11 10:28:21 crc kubenswrapper[4953]: I1211 10:28:21.676480 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nznsk" event={"ID":"9ef08360-d11b-4805-83f4-16c914be3cb8","Type":"ContainerStarted","Data":"2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5"} Dec 11 10:28:21 crc kubenswrapper[4953]: I1211 10:28:21.709925 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nznsk" podStartSLOduration=4.997718958 podStartE2EDuration="7.709902954s" podCreationTimestamp="2025-12-11 10:28:14 +0000 UTC" firstStartedPulling="2025-12-11 10:28:18.654558864 +0000 UTC m=+1016.678417897" lastFinishedPulling="2025-12-11 10:28:21.36674286 +0000 UTC m=+1019.390601893" observedRunningTime="2025-12-11 10:28:21.705040974 +0000 UTC m=+1019.728900007" watchObservedRunningTime="2025-12-11 10:28:21.709902954 +0000 UTC m=+1019.733761987" Dec 11 10:28:21 crc kubenswrapper[4953]: I1211 10:28:21.879544 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-r2rsb"] Dec 11 10:28:21 crc kubenswrapper[4953]: I1211 10:28:21.880558 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" Dec 11 10:28:21 crc kubenswrapper[4953]: I1211 10:28:21.882382 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 11 10:28:21 crc kubenswrapper[4953]: I1211 10:28:21.882439 4953 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-74fks" Dec 11 10:28:21 crc kubenswrapper[4953]: I1211 10:28:21.882453 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 11 10:28:21 crc kubenswrapper[4953]: I1211 10:28:21.891335 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-r2rsb"] Dec 11 10:28:22 crc kubenswrapper[4953]: I1211 10:28:22.067208 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pxvd\" (UniqueName: \"kubernetes.io/projected/096fad5f-94d2-43d5-93d5-d3daf6438972-kube-api-access-6pxvd\") pod \"cert-manager-webhook-f4fb5df64-r2rsb\" (UID: \"096fad5f-94d2-43d5-93d5-d3daf6438972\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" Dec 11 10:28:22 crc kubenswrapper[4953]: I1211 10:28:22.067245 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/096fad5f-94d2-43d5-93d5-d3daf6438972-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-r2rsb\" (UID: \"096fad5f-94d2-43d5-93d5-d3daf6438972\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" Dec 11 10:28:22 crc kubenswrapper[4953]: I1211 10:28:22.168388 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pxvd\" (UniqueName: \"kubernetes.io/projected/096fad5f-94d2-43d5-93d5-d3daf6438972-kube-api-access-6pxvd\") pod \"cert-manager-webhook-f4fb5df64-r2rsb\" (UID: \"096fad5f-94d2-43d5-93d5-d3daf6438972\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" Dec 11 10:28:22 crc kubenswrapper[4953]: I1211 10:28:22.168477 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/096fad5f-94d2-43d5-93d5-d3daf6438972-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-r2rsb\" (UID: \"096fad5f-94d2-43d5-93d5-d3daf6438972\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" Dec 11 10:28:22 crc kubenswrapper[4953]: I1211 10:28:22.190360 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/096fad5f-94d2-43d5-93d5-d3daf6438972-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-r2rsb\" (UID: \"096fad5f-94d2-43d5-93d5-d3daf6438972\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" Dec 11 10:28:22 crc kubenswrapper[4953]: I1211 10:28:22.190455 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pxvd\" (UniqueName: \"kubernetes.io/projected/096fad5f-94d2-43d5-93d5-d3daf6438972-kube-api-access-6pxvd\") pod \"cert-manager-webhook-f4fb5df64-r2rsb\" (UID: \"096fad5f-94d2-43d5-93d5-d3daf6438972\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" Dec 11 10:28:22 crc kubenswrapper[4953]: I1211 10:28:22.197241 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" Dec 11 10:28:22 crc kubenswrapper[4953]: I1211 10:28:22.600060 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-r2rsb"] Dec 11 10:28:22 crc kubenswrapper[4953]: W1211 10:28:22.607420 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod096fad5f_94d2_43d5_93d5_d3daf6438972.slice/crio-f671956a2aa8fd549147124f13f2f475415ed1ff2febafbc8adea96e36e5de77 WatchSource:0}: Error finding container f671956a2aa8fd549147124f13f2f475415ed1ff2febafbc8adea96e36e5de77: Status 404 returned error can't find the container with id f671956a2aa8fd549147124f13f2f475415ed1ff2febafbc8adea96e36e5de77 Dec 11 10:28:22 crc kubenswrapper[4953]: I1211 10:28:22.684358 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" event={"ID":"096fad5f-94d2-43d5-93d5-d3daf6438972","Type":"ContainerStarted","Data":"f671956a2aa8fd549147124f13f2f475415ed1ff2febafbc8adea96e36e5de77"} Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.665827 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2"] Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.668901 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2" Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.671453 4953 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vjkzb" Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.673492 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2"] Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.802489 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6c256f8-7cf4-4196-b3ee-4124af7fed31-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-ht7z2\" (UID: \"a6c256f8-7cf4-4196-b3ee-4124af7fed31\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2" Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.802583 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrsph\" (UniqueName: \"kubernetes.io/projected/a6c256f8-7cf4-4196-b3ee-4124af7fed31-kube-api-access-lrsph\") pod \"cert-manager-cainjector-855d9ccff4-ht7z2\" (UID: \"a6c256f8-7cf4-4196-b3ee-4124af7fed31\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2" Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.903489 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6c256f8-7cf4-4196-b3ee-4124af7fed31-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-ht7z2\" (UID: \"a6c256f8-7cf4-4196-b3ee-4124af7fed31\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2" Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.903592 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrsph\" (UniqueName: \"kubernetes.io/projected/a6c256f8-7cf4-4196-b3ee-4124af7fed31-kube-api-access-lrsph\") pod \"cert-manager-cainjector-855d9ccff4-ht7z2\" (UID: \"a6c256f8-7cf4-4196-b3ee-4124af7fed31\") " 
pod="cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2" Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.927801 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrsph\" (UniqueName: \"kubernetes.io/projected/a6c256f8-7cf4-4196-b3ee-4124af7fed31-kube-api-access-lrsph\") pod \"cert-manager-cainjector-855d9ccff4-ht7z2\" (UID: \"a6c256f8-7cf4-4196-b3ee-4124af7fed31\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2" Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.939349 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6c256f8-7cf4-4196-b3ee-4124af7fed31-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-ht7z2\" (UID: \"a6c256f8-7cf4-4196-b3ee-4124af7fed31\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2" Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.971817 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.971874 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:24 crc kubenswrapper[4953]: I1211 10:28:24.995844 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2" Dec 11 10:28:25 crc kubenswrapper[4953]: I1211 10:28:25.034302 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:25 crc kubenswrapper[4953]: I1211 10:28:25.432590 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2"] Dec 11 10:28:25 crc kubenswrapper[4953]: I1211 10:28:25.706157 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2" event={"ID":"a6c256f8-7cf4-4196-b3ee-4124af7fed31","Type":"ContainerStarted","Data":"eed4860a8af71bca2cb01fcc30d437cdbd43368361f3c95ededcbe1ecc334623"} Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.348169 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w2cg9"] Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.349840 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.373186 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w2cg9"] Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.578342 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-catalog-content\") pod \"certified-operators-w2cg9\" (UID: \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\") " pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.578413 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8c4p\" (UniqueName: \"kubernetes.io/projected/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-kube-api-access-h8c4p\") pod \"certified-operators-w2cg9\" (UID: \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\") " pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.578452 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-utilities\") pod \"certified-operators-w2cg9\" (UID: \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\") " pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.681443 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-catalog-content\") pod \"certified-operators-w2cg9\" (UID: \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\") " pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.681863 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8c4p\" (UniqueName: \"kubernetes.io/projected/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-kube-api-access-h8c4p\") pod \"certified-operators-w2cg9\" (UID: \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\") " pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.681911 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-utilities\") pod \"certified-operators-w2cg9\" (UID: \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\") " pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.682163 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-catalog-content\") pod \"certified-operators-w2cg9\" (UID: \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\") " pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.682276 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-utilities\") pod \"certified-operators-w2cg9\" (UID: \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\") " pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.714437 4953 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h8c4p\" (UniqueName: \"kubernetes.io/projected/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-kube-api-access-h8c4p\") pod \"certified-operators-w2cg9\" (UID: \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\") " pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:28 crc kubenswrapper[4953]: I1211 10:28:28.982284 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:35 crc kubenswrapper[4953]: I1211 10:28:35.017493 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:35 crc kubenswrapper[4953]: I1211 10:28:35.062221 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nznsk"] Dec 11 10:28:35 crc kubenswrapper[4953]: I1211 10:28:35.080210 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w2cg9"] Dec 11 10:28:35 crc kubenswrapper[4953]: W1211 10:28:35.084636 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f4c8cd1_2ee1_4d74_aedf_f5da8fa20c9d.slice/crio-ee1dd12d62c82d4e7956ab483db977471a6af555b296f1f9d4ef776f7f87f89d WatchSource:0}: Error finding container ee1dd12d62c82d4e7956ab483db977471a6af555b296f1f9d4ef776f7f87f89d: Status 404 returned error can't find the container with id ee1dd12d62c82d4e7956ab483db977471a6af555b296f1f9d4ef776f7f87f89d Dec 11 10:28:35 crc kubenswrapper[4953]: I1211 10:28:35.785080 4953 generic.go:334] "Generic (PLEG): container finished" podID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerID="abbd36f7f17c206c9d80a475740e8d31d51ef05e06140ee7bbc6cefe0454821d" exitCode=0 Dec 11 10:28:35 crc kubenswrapper[4953]: I1211 10:28:35.785192 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2cg9" event={"ID":"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d","Type":"ContainerDied","Data":"abbd36f7f17c206c9d80a475740e8d31d51ef05e06140ee7bbc6cefe0454821d"} Dec 11 10:28:35 crc kubenswrapper[4953]: I1211 10:28:35.785240 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2cg9" event={"ID":"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d","Type":"ContainerStarted","Data":"ee1dd12d62c82d4e7956ab483db977471a6af555b296f1f9d4ef776f7f87f89d"} Dec 11 10:28:35 crc kubenswrapper[4953]: I1211 10:28:35.786883 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" event={"ID":"096fad5f-94d2-43d5-93d5-d3daf6438972","Type":"ContainerStarted","Data":"0f8e55b44e395407f6d4edba45727e795d33bf7c47e7cbf94adbc40ed7ed2d98"} Dec 11 10:28:35 crc kubenswrapper[4953]: I1211 10:28:35.786981 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" Dec 11 10:28:35 crc kubenswrapper[4953]: I1211 10:28:35.788513 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2" event={"ID":"a6c256f8-7cf4-4196-b3ee-4124af7fed31","Type":"ContainerStarted","Data":"c2f33685f0f46c9d6232515a912e650c24df179092896fe45e00132b9d8263dd"} Dec 11 10:28:35 crc kubenswrapper[4953]: I1211 10:28:35.788652 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nznsk" podUID="9ef08360-d11b-4805-83f4-16c914be3cb8" 
containerName="registry-server" containerID="cri-o://2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5" gracePeriod=2 Dec 11 10:28:35 crc kubenswrapper[4953]: I1211 10:28:35.835958 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" podStartSLOduration=2.660459961 podStartE2EDuration="14.835929973s" podCreationTimestamp="2025-12-11 10:28:21 +0000 UTC" firstStartedPulling="2025-12-11 10:28:22.609974148 +0000 UTC m=+1020.633833181" lastFinishedPulling="2025-12-11 10:28:34.78544416 +0000 UTC m=+1032.809303193" observedRunningTime="2025-12-11 10:28:35.831872841 +0000 UTC m=+1033.855731884" watchObservedRunningTime="2025-12-11 10:28:35.835929973 +0000 UTC m=+1033.859789016" Dec 11 10:28:35 crc kubenswrapper[4953]: I1211 10:28:35.965144 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-ht7z2" podStartSLOduration=2.644326307 podStartE2EDuration="11.965116965s" podCreationTimestamp="2025-12-11 10:28:24 +0000 UTC" firstStartedPulling="2025-12-11 10:28:25.457827082 +0000 UTC m=+1023.481686125" lastFinishedPulling="2025-12-11 10:28:34.77861775 +0000 UTC m=+1032.802476783" observedRunningTime="2025-12-11 10:28:35.956373617 +0000 UTC m=+1033.980232650" watchObservedRunningTime="2025-12-11 10:28:35.965116965 +0000 UTC m=+1033.988975998" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.296851 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.394187 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75rxf\" (UniqueName: \"kubernetes.io/projected/9ef08360-d11b-4805-83f4-16c914be3cb8-kube-api-access-75rxf\") pod \"9ef08360-d11b-4805-83f4-16c914be3cb8\" (UID: \"9ef08360-d11b-4805-83f4-16c914be3cb8\") " Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.394383 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ef08360-d11b-4805-83f4-16c914be3cb8-catalog-content\") pod \"9ef08360-d11b-4805-83f4-16c914be3cb8\" (UID: \"9ef08360-d11b-4805-83f4-16c914be3cb8\") " Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.394447 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ef08360-d11b-4805-83f4-16c914be3cb8-utilities\") pod \"9ef08360-d11b-4805-83f4-16c914be3cb8\" (UID: \"9ef08360-d11b-4805-83f4-16c914be3cb8\") " Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.395359 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ef08360-d11b-4805-83f4-16c914be3cb8-utilities" (OuterVolumeSpecName: "utilities") pod "9ef08360-d11b-4805-83f4-16c914be3cb8" (UID: "9ef08360-d11b-4805-83f4-16c914be3cb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.402778 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef08360-d11b-4805-83f4-16c914be3cb8-kube-api-access-75rxf" (OuterVolumeSpecName: "kube-api-access-75rxf") pod "9ef08360-d11b-4805-83f4-16c914be3cb8" (UID: "9ef08360-d11b-4805-83f4-16c914be3cb8"). InnerVolumeSpecName "kube-api-access-75rxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.415918 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ef08360-d11b-4805-83f4-16c914be3cb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ef08360-d11b-4805-83f4-16c914be3cb8" (UID: "9ef08360-d11b-4805-83f4-16c914be3cb8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.495751 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ef08360-d11b-4805-83f4-16c914be3cb8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.495796 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ef08360-d11b-4805-83f4-16c914be3cb8-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.495810 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75rxf\" (UniqueName: \"kubernetes.io/projected/9ef08360-d11b-4805-83f4-16c914be3cb8-kube-api-access-75rxf\") on node \"crc\" DevicePath \"\"" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.798099 4953 generic.go:334] "Generic (PLEG): container finished" podID="9ef08360-d11b-4805-83f4-16c914be3cb8" containerID="2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5" exitCode=0 Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.798185 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nznsk" event={"ID":"9ef08360-d11b-4805-83f4-16c914be3cb8","Type":"ContainerDied","Data":"2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5"} Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.798224 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nznsk" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.798295 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nznsk" event={"ID":"9ef08360-d11b-4805-83f4-16c914be3cb8","Type":"ContainerDied","Data":"e60d7a378e950b9838cea91b71ce8d0fae093842a6534d3999d8188f4a7317ef"} Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.798339 4953 scope.go:117] "RemoveContainer" containerID="2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.818123 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nznsk"] Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.822179 4953 scope.go:117] "RemoveContainer" containerID="7f0218b6a292d812c83144d1f4d48e6b6ea3d1c312b2f2802693449bc4e599d8" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.822975 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nznsk"] Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.839799 4953 scope.go:117] "RemoveContainer" containerID="333b24b925ec443677d2cb37b701556d0079ad70b4db94c19fc1f95c2d15f0fd" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.948047 4953 scope.go:117] "RemoveContainer" containerID="2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5" Dec 11 10:28:36 crc kubenswrapper[4953]: E1211 10:28:36.948683 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5\": container with ID starting with 2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5 not found: ID does not exist" containerID="2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.948751 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5"} err="failed to get container status \"2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5\": rpc error: code = NotFound desc = could not find container \"2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5\": container with ID starting with 2fcbc9f98870f27559a5db645a19a26c5be114400741d09d55e7af6f6bb9bde5 not found: ID does not exist" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.948798 4953 scope.go:117] "RemoveContainer" containerID="7f0218b6a292d812c83144d1f4d48e6b6ea3d1c312b2f2802693449bc4e599d8" Dec 11 10:28:36 crc kubenswrapper[4953]: E1211 10:28:36.950348 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f0218b6a292d812c83144d1f4d48e6b6ea3d1c312b2f2802693449bc4e599d8\": container with ID starting with 7f0218b6a292d812c83144d1f4d48e6b6ea3d1c312b2f2802693449bc4e599d8 not found: ID does not exist" containerID="7f0218b6a292d812c83144d1f4d48e6b6ea3d1c312b2f2802693449bc4e599d8" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.950382 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0218b6a292d812c83144d1f4d48e6b6ea3d1c312b2f2802693449bc4e599d8"} err="failed to get container status \"7f0218b6a292d812c83144d1f4d48e6b6ea3d1c312b2f2802693449bc4e599d8\": rpc error: code = NotFound desc = could not find 
container \"7f0218b6a292d812c83144d1f4d48e6b6ea3d1c312b2f2802693449bc4e599d8\": container with ID starting with 7f0218b6a292d812c83144d1f4d48e6b6ea3d1c312b2f2802693449bc4e599d8 not found: ID does not exist" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.950404 4953 scope.go:117] "RemoveContainer" containerID="333b24b925ec443677d2cb37b701556d0079ad70b4db94c19fc1f95c2d15f0fd" Dec 11 10:28:36 crc kubenswrapper[4953]: E1211 10:28:36.950747 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"333b24b925ec443677d2cb37b701556d0079ad70b4db94c19fc1f95c2d15f0fd\": container with ID starting with 333b24b925ec443677d2cb37b701556d0079ad70b4db94c19fc1f95c2d15f0fd not found: ID does not exist" containerID="333b24b925ec443677d2cb37b701556d0079ad70b4db94c19fc1f95c2d15f0fd" Dec 11 10:28:36 crc kubenswrapper[4953]: I1211 10:28:36.950784 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"333b24b925ec443677d2cb37b701556d0079ad70b4db94c19fc1f95c2d15f0fd"} err="failed to get container status \"333b24b925ec443677d2cb37b701556d0079ad70b4db94c19fc1f95c2d15f0fd\": rpc error: code = NotFound desc = could not find container \"333b24b925ec443677d2cb37b701556d0079ad70b4db94c19fc1f95c2d15f0fd\": container with ID starting with 333b24b925ec443677d2cb37b701556d0079ad70b4db94c19fc1f95c2d15f0fd not found: ID does not exist" Dec 11 10:28:38 crc kubenswrapper[4953]: I1211 10:28:38.489073 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef08360-d11b-4805-83f4-16c914be3cb8" path="/var/lib/kubelet/pods/9ef08360-d11b-4805-83f4-16c914be3cb8/volumes" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.029554 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-zpjqb"] Dec 11 10:28:41 crc kubenswrapper[4953]: E1211 10:28:41.030325 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef08360-d11b-4805-83f4-16c914be3cb8" containerName="extract-utilities" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.030346 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef08360-d11b-4805-83f4-16c914be3cb8" containerName="extract-utilities" Dec 11 10:28:41 crc kubenswrapper[4953]: E1211 10:28:41.030364 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef08360-d11b-4805-83f4-16c914be3cb8" containerName="extract-content" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.030391 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef08360-d11b-4805-83f4-16c914be3cb8" containerName="extract-content" Dec 11 10:28:41 crc kubenswrapper[4953]: E1211 10:28:41.030417 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef08360-d11b-4805-83f4-16c914be3cb8" containerName="registry-server" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.030423 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef08360-d11b-4805-83f4-16c914be3cb8" containerName="registry-server" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.030633 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef08360-d11b-4805-83f4-16c914be3cb8" containerName="registry-server" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.031178 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-zpjqb" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.035303 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-zpjqb"] Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.038700 4953 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7n7fx" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.154360 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b04280da-0938-44cd-8c87-04fadceb003c-bound-sa-token\") pod \"cert-manager-86cb77c54b-zpjqb\" (UID: \"b04280da-0938-44cd-8c87-04fadceb003c\") " pod="cert-manager/cert-manager-86cb77c54b-zpjqb" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.154440 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlm9t\" (UniqueName: \"kubernetes.io/projected/b04280da-0938-44cd-8c87-04fadceb003c-kube-api-access-nlm9t\") pod \"cert-manager-86cb77c54b-zpjqb\" (UID: \"b04280da-0938-44cd-8c87-04fadceb003c\") " pod="cert-manager/cert-manager-86cb77c54b-zpjqb" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.255707 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b04280da-0938-44cd-8c87-04fadceb003c-bound-sa-token\") pod \"cert-manager-86cb77c54b-zpjqb\" (UID: \"b04280da-0938-44cd-8c87-04fadceb003c\") " pod="cert-manager/cert-manager-86cb77c54b-zpjqb" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.255810 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlm9t\" (UniqueName: \"kubernetes.io/projected/b04280da-0938-44cd-8c87-04fadceb003c-kube-api-access-nlm9t\") pod \"cert-manager-86cb77c54b-zpjqb\" (UID: \"b04280da-0938-44cd-8c87-04fadceb003c\") " pod="cert-manager/cert-manager-86cb77c54b-zpjqb" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.275183 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b04280da-0938-44cd-8c87-04fadceb003c-bound-sa-token\") pod \"cert-manager-86cb77c54b-zpjqb\" (UID: \"b04280da-0938-44cd-8c87-04fadceb003c\") " pod="cert-manager/cert-manager-86cb77c54b-zpjqb" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.275404 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlm9t\" (UniqueName: \"kubernetes.io/projected/b04280da-0938-44cd-8c87-04fadceb003c-kube-api-access-nlm9t\") pod \"cert-manager-86cb77c54b-zpjqb\" (UID: \"b04280da-0938-44cd-8c87-04fadceb003c\") " pod="cert-manager/cert-manager-86cb77c54b-zpjqb" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.356063 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-zpjqb" Dec 11 10:28:41 crc kubenswrapper[4953]: I1211 10:28:41.877103 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2cg9" event={"ID":"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d","Type":"ContainerStarted","Data":"87dc5bc65de8a1312cbf7c9c65698827eed6e5c75dd30656027fe156d18c9e90"} Dec 11 10:28:42 crc kubenswrapper[4953]: I1211 10:28:42.021375 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-zpjqb"] Dec 11 10:28:42 crc kubenswrapper[4953]: W1211 10:28:42.028771 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb04280da_0938_44cd_8c87_04fadceb003c.slice/crio-3aa567b5033fb6aa05210f9513028dd74767b5e37633ed6d7c384c96f2e5ceae WatchSource:0}: Error finding container 3aa567b5033fb6aa05210f9513028dd74767b5e37633ed6d7c384c96f2e5ceae: Status 404 returned error can't find the container with id 3aa567b5033fb6aa05210f9513028dd74767b5e37633ed6d7c384c96f2e5ceae Dec 11 10:28:42 crc kubenswrapper[4953]: I1211 10:28:42.200939 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-r2rsb" Dec 11 10:28:42 crc kubenswrapper[4953]: I1211 10:28:42.884303 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-zpjqb" event={"ID":"b04280da-0938-44cd-8c87-04fadceb003c","Type":"ContainerStarted","Data":"39c732d760e87e4553c37305af2a8ae3db52ebed044d67d7f8129e8cb2b1f508"} Dec 11 10:28:42 crc kubenswrapper[4953]: I1211 10:28:42.884734 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-zpjqb" event={"ID":"b04280da-0938-44cd-8c87-04fadceb003c","Type":"ContainerStarted","Data":"3aa567b5033fb6aa05210f9513028dd74767b5e37633ed6d7c384c96f2e5ceae"} Dec 11 10:28:42 crc kubenswrapper[4953]: I1211 10:28:42.886947 4953 generic.go:334] "Generic (PLEG): container finished" podID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerID="87dc5bc65de8a1312cbf7c9c65698827eed6e5c75dd30656027fe156d18c9e90" exitCode=0 Dec 11 10:28:42 crc kubenswrapper[4953]: I1211 10:28:42.886979 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2cg9" event={"ID":"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d","Type":"ContainerDied","Data":"87dc5bc65de8a1312cbf7c9c65698827eed6e5c75dd30656027fe156d18c9e90"} Dec 11 10:28:42 crc kubenswrapper[4953]: I1211 10:28:42.905633 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-zpjqb" podStartSLOduration=2.90560905 podStartE2EDuration="2.90560905s" podCreationTimestamp="2025-12-11 10:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:28:42.899871964 +0000 UTC m=+1040.923731007" watchObservedRunningTime="2025-12-11 10:28:42.90560905 +0000 UTC m=+1040.929468093" Dec 11 10:28:43 crc kubenswrapper[4953]: I1211 10:28:43.895616 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2cg9" event={"ID":"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d","Type":"ContainerStarted","Data":"6a41c2126cd02748818b1009d5997376bace4e2f97bc5d9788dde104b76e2803"} Dec 11 10:28:44 crc kubenswrapper[4953]: I1211 10:28:44.115927 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-w2cg9" podStartSLOduration=8.340264321 podStartE2EDuration="16.115911311s" podCreationTimestamp="2025-12-11 10:28:28 +0000 UTC" firstStartedPulling="2025-12-11 10:28:35.78863267 +0000 UTC m=+1033.812491703" lastFinishedPulling="2025-12-11 10:28:43.56427966 +0000 UTC m=+1041.588138693" observedRunningTime="2025-12-11 10:28:44.111280061 +0000 UTC m=+1042.135139094" watchObservedRunningTime="2025-12-11 10:28:44.115911311 +0000 UTC m=+1042.139770344" Dec 11 10:28:45 crc kubenswrapper[4953]: I1211 10:28:45.751677 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bvj8x"] Dec 11 10:28:45 crc kubenswrapper[4953]: I1211 10:28:45.753361 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bvj8x" Dec 11 10:28:45 crc kubenswrapper[4953]: I1211 10:28:45.759186 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fq55k" Dec 11 10:28:45 crc kubenswrapper[4953]: I1211 10:28:45.759468 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 11 10:28:45 crc kubenswrapper[4953]: I1211 10:28:45.759794 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 11 10:28:45 crc kubenswrapper[4953]: I1211 10:28:45.771134 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bvj8x"] Dec 11 10:28:45 crc kubenswrapper[4953]: I1211 10:28:45.909603 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnvnq\" (UniqueName: \"kubernetes.io/projected/cd873cf1-d102-4c66-b3cc-e4a775526d43-kube-api-access-cnvnq\") pod \"openstack-operator-index-bvj8x\" (UID: \"cd873cf1-d102-4c66-b3cc-e4a775526d43\") " pod="openstack-operators/openstack-operator-index-bvj8x" Dec 11 10:28:46 crc kubenswrapper[4953]: I1211 10:28:46.011013 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnvnq\" (UniqueName: \"kubernetes.io/projected/cd873cf1-d102-4c66-b3cc-e4a775526d43-kube-api-access-cnvnq\") pod \"openstack-operator-index-bvj8x\" (UID: \"cd873cf1-d102-4c66-b3cc-e4a775526d43\") " pod="openstack-operators/openstack-operator-index-bvj8x" Dec 11 10:28:46 crc kubenswrapper[4953]: I1211 10:28:46.029350 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnvnq\" (UniqueName: \"kubernetes.io/projected/cd873cf1-d102-4c66-b3cc-e4a775526d43-kube-api-access-cnvnq\") pod \"openstack-operator-index-bvj8x\" (UID: \"cd873cf1-d102-4c66-b3cc-e4a775526d43\") " pod="openstack-operators/openstack-operator-index-bvj8x" Dec 11 10:28:46 crc kubenswrapper[4953]: I1211 10:28:46.087068 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bvj8x" Dec 11 10:28:46 crc kubenswrapper[4953]: I1211 10:28:46.513316 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bvj8x"] Dec 11 10:28:46 crc kubenswrapper[4953]: W1211 10:28:46.515122 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd873cf1_d102_4c66_b3cc_e4a775526d43.slice/crio-7e84cd0c40bde2e804e46bb352b28e0ab1cfdbbe5ff96a750aca0efbee1f4395 WatchSource:0}: Error finding container 7e84cd0c40bde2e804e46bb352b28e0ab1cfdbbe5ff96a750aca0efbee1f4395: Status 404 returned error can't find the container with id 7e84cd0c40bde2e804e46bb352b28e0ab1cfdbbe5ff96a750aca0efbee1f4395 Dec 11 10:28:46 crc kubenswrapper[4953]: I1211 10:28:46.935686 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bvj8x" event={"ID":"cd873cf1-d102-4c66-b3cc-e4a775526d43","Type":"ContainerStarted","Data":"7e84cd0c40bde2e804e46bb352b28e0ab1cfdbbe5ff96a750aca0efbee1f4395"} Dec 11 10:28:48 crc kubenswrapper[4953]: I1211 10:28:48.330326 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bvj8x"] Dec 11 10:28:48 crc kubenswrapper[4953]: I1211 10:28:48.739105 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-dx7jh"] Dec 11 10:28:48 crc kubenswrapper[4953]: I1211 10:28:48.741650 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dx7jh" Dec 11 10:28:48 crc kubenswrapper[4953]: I1211 10:28:48.747846 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dx7jh"] Dec 11 10:28:48 crc kubenswrapper[4953]: I1211 10:28:48.822238 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4fdt\" (UniqueName: \"kubernetes.io/projected/523ebe34-8eb2-4a92-ba2f-180e03f29d3a-kube-api-access-n4fdt\") pod \"openstack-operator-index-dx7jh\" (UID: \"523ebe34-8eb2-4a92-ba2f-180e03f29d3a\") " pod="openstack-operators/openstack-operator-index-dx7jh" Dec 11 10:28:49 crc kubenswrapper[4953]: I1211 10:28:49.124381 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4fdt\" (UniqueName: \"kubernetes.io/projected/523ebe34-8eb2-4a92-ba2f-180e03f29d3a-kube-api-access-n4fdt\") pod \"openstack-operator-index-dx7jh\" (UID: \"523ebe34-8eb2-4a92-ba2f-180e03f29d3a\") " pod="openstack-operators/openstack-operator-index-dx7jh" Dec 11 10:28:49 crc kubenswrapper[4953]: I1211 10:28:49.128298 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:49 crc kubenswrapper[4953]: I1211 10:28:49.128971 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:49 crc kubenswrapper[4953]: I1211 10:28:49.158473 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4fdt\" (UniqueName: \"kubernetes.io/projected/523ebe34-8eb2-4a92-ba2f-180e03f29d3a-kube-api-access-n4fdt\") pod \"openstack-operator-index-dx7jh\" (UID: \"523ebe34-8eb2-4a92-ba2f-180e03f29d3a\") " pod="openstack-operators/openstack-operator-index-dx7jh" Dec 11 10:28:49 crc kubenswrapper[4953]: I1211 10:28:49.188256 4953 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:49 crc kubenswrapper[4953]: I1211 10:28:49.430259 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dx7jh" Dec 11 10:28:49 crc kubenswrapper[4953]: I1211 10:28:49.848765 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dx7jh"] Dec 11 10:28:50 crc kubenswrapper[4953]: I1211 10:28:50.161316 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bvj8x" event={"ID":"cd873cf1-d102-4c66-b3cc-e4a775526d43","Type":"ContainerStarted","Data":"fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9"} Dec 11 10:28:50 crc kubenswrapper[4953]: I1211 10:28:50.161441 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bvj8x" podUID="cd873cf1-d102-4c66-b3cc-e4a775526d43" containerName="registry-server" containerID="cri-o://fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9" gracePeriod=2 Dec 11 10:28:50 crc kubenswrapper[4953]: I1211 10:28:50.163612 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dx7jh" event={"ID":"523ebe34-8eb2-4a92-ba2f-180e03f29d3a","Type":"ContainerStarted","Data":"1dfbc61df96eacd23e1a4649faf4f008234cadb3c950ffcde86ed0f12c60eb81"} Dec 11 10:28:50 crc kubenswrapper[4953]: I1211 10:28:50.187932 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bvj8x" podStartSLOduration=2.533210242 podStartE2EDuration="5.187911129s" podCreationTimestamp="2025-12-11 10:28:45 +0000 UTC" firstStartedPulling="2025-12-11 10:28:46.518284315 +0000 UTC m=+1044.542143338" lastFinishedPulling="2025-12-11 10:28:49.172985192 +0000 UTC m=+1047.196844225" observedRunningTime="2025-12-11 10:28:50.184189428 +0000 UTC m=+1048.208048471" watchObservedRunningTime="2025-12-11 10:28:50.187911129 +0000 UTC m=+1048.211770162" Dec 11 10:28:50 crc kubenswrapper[4953]: I1211 10:28:50.215105 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 10:28:50 crc kubenswrapper[4953]: E1211 10:28:50.382563 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd873cf1_d102_4c66_b3cc_e4a775526d43.slice/crio-conmon-fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.048426 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bvj8x" Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.063728 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnvnq\" (UniqueName: \"kubernetes.io/projected/cd873cf1-d102-4c66-b3cc-e4a775526d43-kube-api-access-cnvnq\") pod \"cd873cf1-d102-4c66-b3cc-e4a775526d43\" (UID: \"cd873cf1-d102-4c66-b3cc-e4a775526d43\") " Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.070833 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd873cf1-d102-4c66-b3cc-e4a775526d43-kube-api-access-cnvnq" (OuterVolumeSpecName: "kube-api-access-cnvnq") pod "cd873cf1-d102-4c66-b3cc-e4a775526d43" (UID: "cd873cf1-d102-4c66-b3cc-e4a775526d43"). InnerVolumeSpecName "kube-api-access-cnvnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.165307 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnvnq\" (UniqueName: \"kubernetes.io/projected/cd873cf1-d102-4c66-b3cc-e4a775526d43-kube-api-access-cnvnq\") on node \"crc\" DevicePath \"\"" Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.173889 4953 generic.go:334] "Generic (PLEG): container finished" podID="cd873cf1-d102-4c66-b3cc-e4a775526d43" containerID="fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9" exitCode=0 Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.173985 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bvj8x" Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.173951 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bvj8x" event={"ID":"cd873cf1-d102-4c66-b3cc-e4a775526d43","Type":"ContainerDied","Data":"fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9"} Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.174121 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bvj8x" event={"ID":"cd873cf1-d102-4c66-b3cc-e4a775526d43","Type":"ContainerDied","Data":"7e84cd0c40bde2e804e46bb352b28e0ab1cfdbbe5ff96a750aca0efbee1f4395"} Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.174163 4953 scope.go:117] "RemoveContainer" containerID="fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9" Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.177490 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dx7jh" event={"ID":"523ebe34-8eb2-4a92-ba2f-180e03f29d3a","Type":"ContainerStarted","Data":"474079ed7385847cb58ee77d5e47246cf8aa389639c121a9c372ac9c45a10cfc"} Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.195464 4953 scope.go:117] "RemoveContainer" containerID="fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9" Dec 11 10:28:51 crc kubenswrapper[4953]: E1211 10:28:51.196498 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9\": container with ID starting with fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9 not found: ID does not exist" containerID="fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9" Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.196550 4953 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9"} err="failed to get container status \"fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9\": rpc error: code = NotFound desc = could not find container \"fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9\": container with ID starting with fa1af79aa01440dbf222f9a00777186d6ead30b59476825264772f4a2f3f5db9 not found: ID does not exist" Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.224016 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-dx7jh" podStartSLOduration=2.704986658 podStartE2EDuration="3.22399184s" podCreationTimestamp="2025-12-11 10:28:48 +0000 UTC" firstStartedPulling="2025-12-11 10:28:49.876650512 +0000 UTC m=+1047.900509545" lastFinishedPulling="2025-12-11 10:28:50.395655694 +0000 UTC m=+1048.419514727" observedRunningTime="2025-12-11 10:28:51.195860312 +0000 UTC m=+1049.219719345" watchObservedRunningTime="2025-12-11 10:28:51.22399184 +0000 UTC m=+1049.247850893" Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.231349 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bvj8x"] Dec 11 10:28:51 crc kubenswrapper[4953]: I1211 10:28:51.238367 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bvj8x"] Dec 11 10:28:52 crc kubenswrapper[4953]: I1211 10:28:52.312564 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w2cg9"] Dec 11 10:28:52 crc kubenswrapper[4953]: I1211 10:28:52.483053 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd873cf1-d102-4c66-b3cc-e4a775526d43" path="/var/lib/kubelet/pods/cd873cf1-d102-4c66-b3cc-e4a775526d43/volumes" Dec 11 10:28:52 crc kubenswrapper[4953]: I1211 10:28:52.942535 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrz7v"] Dec 11 10:28:52 crc kubenswrapper[4953]: I1211 10:28:52.942888 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rrz7v" podUID="3e2c3944-72ff-4162-88c6-3e8583aa5065" containerName="registry-server" containerID="cri-o://08043763bcd936c27f4c543bf1bb5bebe6691edf79488ba883109dd0fbf509c4" gracePeriod=2 Dec 11 10:28:57 crc kubenswrapper[4953]: I1211 10:28:57.347543 4953 generic.go:334] "Generic (PLEG): container finished" podID="3e2c3944-72ff-4162-88c6-3e8583aa5065" containerID="08043763bcd936c27f4c543bf1bb5bebe6691edf79488ba883109dd0fbf509c4" exitCode=0 Dec 11 10:28:57 crc kubenswrapper[4953]: I1211 10:28:57.347705 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrz7v" event={"ID":"3e2c3944-72ff-4162-88c6-3e8583aa5065","Type":"ContainerDied","Data":"08043763bcd936c27f4c543bf1bb5bebe6691edf79488ba883109dd0fbf509c4"} Dec 11 10:28:57 crc kubenswrapper[4953]: E1211 10:28:57.591542 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 08043763bcd936c27f4c543bf1bb5bebe6691edf79488ba883109dd0fbf509c4 is running failed: container process not found" containerID="08043763bcd936c27f4c543bf1bb5bebe6691edf79488ba883109dd0fbf509c4" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:28:57 crc kubenswrapper[4953]: E1211 10:28:57.592017 4953 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 08043763bcd936c27f4c543bf1bb5bebe6691edf79488ba883109dd0fbf509c4 is running failed: container process not found" containerID="08043763bcd936c27f4c543bf1bb5bebe6691edf79488ba883109dd0fbf509c4" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:28:57 crc kubenswrapper[4953]: E1211 10:28:57.592368 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 08043763bcd936c27f4c543bf1bb5bebe6691edf79488ba883109dd0fbf509c4 is running failed: container process not found" containerID="08043763bcd936c27f4c543bf1bb5bebe6691edf79488ba883109dd0fbf509c4" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 10:28:57 crc kubenswrapper[4953]: E1211 10:28:57.592409 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 08043763bcd936c27f4c543bf1bb5bebe6691edf79488ba883109dd0fbf509c4 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-rrz7v" podUID="3e2c3944-72ff-4162-88c6-3e8583aa5065" containerName="registry-server" Dec 11 10:28:57 crc kubenswrapper[4953]: I1211 10:28:57.627973 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:28:57 crc kubenswrapper[4953]: I1211 10:28:57.775016 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2c3944-72ff-4162-88c6-3e8583aa5065-catalog-content\") pod \"3e2c3944-72ff-4162-88c6-3e8583aa5065\" (UID: \"3e2c3944-72ff-4162-88c6-3e8583aa5065\") " Dec 11 10:28:57 crc kubenswrapper[4953]: I1211 10:28:57.775090 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8prg9\" (UniqueName: \"kubernetes.io/projected/3e2c3944-72ff-4162-88c6-3e8583aa5065-kube-api-access-8prg9\") pod \"3e2c3944-72ff-4162-88c6-3e8583aa5065\" (UID: \"3e2c3944-72ff-4162-88c6-3e8583aa5065\") " Dec 11 10:28:57 crc kubenswrapper[4953]: I1211 10:28:57.775130 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2c3944-72ff-4162-88c6-3e8583aa5065-utilities\") pod \"3e2c3944-72ff-4162-88c6-3e8583aa5065\" (UID: \"3e2c3944-72ff-4162-88c6-3e8583aa5065\") " Dec 11 10:28:57 crc kubenswrapper[4953]: I1211 10:28:57.776261 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e2c3944-72ff-4162-88c6-3e8583aa5065-utilities" (OuterVolumeSpecName: "utilities") pod "3e2c3944-72ff-4162-88c6-3e8583aa5065" (UID: "3e2c3944-72ff-4162-88c6-3e8583aa5065"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:28:57 crc kubenswrapper[4953]: I1211 10:28:57.781692 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e2c3944-72ff-4162-88c6-3e8583aa5065-kube-api-access-8prg9" (OuterVolumeSpecName: "kube-api-access-8prg9") pod "3e2c3944-72ff-4162-88c6-3e8583aa5065" (UID: "3e2c3944-72ff-4162-88c6-3e8583aa5065"). InnerVolumeSpecName "kube-api-access-8prg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:28:57 crc kubenswrapper[4953]: I1211 10:28:57.826373 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e2c3944-72ff-4162-88c6-3e8583aa5065-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e2c3944-72ff-4162-88c6-3e8583aa5065" (UID: "3e2c3944-72ff-4162-88c6-3e8583aa5065"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:28:57 crc kubenswrapper[4953]: I1211 10:28:57.876413 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2c3944-72ff-4162-88c6-3e8583aa5065-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:28:57 crc kubenswrapper[4953]: I1211 10:28:57.876495 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8prg9\" (UniqueName: \"kubernetes.io/projected/3e2c3944-72ff-4162-88c6-3e8583aa5065-kube-api-access-8prg9\") on node \"crc\" DevicePath \"\"" Dec 11 10:28:57 crc kubenswrapper[4953]: I1211 10:28:57.876510 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2c3944-72ff-4162-88c6-3e8583aa5065-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:28:58 crc kubenswrapper[4953]: I1211 10:28:58.357281 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrz7v" event={"ID":"3e2c3944-72ff-4162-88c6-3e8583aa5065","Type":"ContainerDied","Data":"62212278fae6e35fb66842baa18bcd78b7c50e0fc4d906bd66e2af0826e16e51"} Dec 11 10:28:58 crc kubenswrapper[4953]: I1211 10:28:58.357909 4953 scope.go:117] "RemoveContainer" containerID="08043763bcd936c27f4c543bf1bb5bebe6691edf79488ba883109dd0fbf509c4" Dec 11 10:28:58 crc kubenswrapper[4953]: I1211 10:28:58.357429 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rrz7v" Dec 11 10:28:58 crc kubenswrapper[4953]: I1211 10:28:58.380113 4953 scope.go:117] "RemoveContainer" containerID="0a071c74c5045f228b9600cf1b63ddf9fa00bd79550644f9142a31586be178ad" Dec 11 10:28:58 crc kubenswrapper[4953]: I1211 10:28:58.395934 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrz7v"] Dec 11 10:28:58 crc kubenswrapper[4953]: I1211 10:28:58.398446 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rrz7v"] Dec 11 10:28:58 crc kubenswrapper[4953]: I1211 10:28:58.417153 4953 scope.go:117] "RemoveContainer" containerID="fa75007dfd353a951c423b0cb3df9b218cc881780b15fc9ba884e91e2fde8c6b" Dec 11 10:28:58 crc kubenswrapper[4953]: I1211 10:28:58.485759 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e2c3944-72ff-4162-88c6-3e8583aa5065" path="/var/lib/kubelet/pods/3e2c3944-72ff-4162-88c6-3e8583aa5065/volumes" Dec 11 10:28:59 crc kubenswrapper[4953]: I1211 10:28:59.431265 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-dx7jh" Dec 11 10:28:59 crc kubenswrapper[4953]: I1211 10:28:59.431356 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-dx7jh" Dec 11 10:28:59 crc kubenswrapper[4953]: I1211 10:28:59.478359 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-dx7jh" Dec 11 10:29:00 crc kubenswrapper[4953]: I1211 10:29:00.436460 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-dx7jh" Dec 11 10:29:04 crc kubenswrapper[4953]: I1211 10:29:04.744319 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ff4hk"] Dec 11 10:29:04 crc kubenswrapper[4953]: E1211 10:29:04.744956 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2c3944-72ff-4162-88c6-3e8583aa5065" containerName="extract-utilities" Dec 11 10:29:04 crc kubenswrapper[4953]: I1211 10:29:04.744984 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2c3944-72ff-4162-88c6-3e8583aa5065" containerName="extract-utilities" Dec 11 10:29:04 crc kubenswrapper[4953]: E1211 10:29:04.745014 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2c3944-72ff-4162-88c6-3e8583aa5065" containerName="extract-content" Dec 11 10:29:04 crc kubenswrapper[4953]: I1211 10:29:04.745024 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2c3944-72ff-4162-88c6-3e8583aa5065" containerName="extract-content" Dec 11 10:29:04 crc kubenswrapper[4953]: E1211 10:29:04.745045 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd873cf1-d102-4c66-b3cc-e4a775526d43" containerName="registry-server" Dec 11 10:29:04 crc kubenswrapper[4953]: I1211 10:29:04.745054 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd873cf1-d102-4c66-b3cc-e4a775526d43" containerName="registry-server" Dec 11 10:29:04 crc kubenswrapper[4953]: E1211 10:29:04.745064 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2c3944-72ff-4162-88c6-3e8583aa5065" containerName="registry-server" Dec 11 10:29:04 crc kubenswrapper[4953]: I1211 10:29:04.745073 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2c3944-72ff-4162-88c6-3e8583aa5065" 
containerName="registry-server" Dec 11 10:29:04 crc kubenswrapper[4953]: I1211 10:29:04.745223 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd873cf1-d102-4c66-b3cc-e4a775526d43" containerName="registry-server" Dec 11 10:29:04 crc kubenswrapper[4953]: I1211 10:29:04.745234 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e2c3944-72ff-4162-88c6-3e8583aa5065" containerName="registry-server" Dec 11 10:29:04 crc kubenswrapper[4953]: I1211 10:29:04.746309 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:04 crc kubenswrapper[4953]: I1211 10:29:04.755851 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ff4hk"] Dec 11 10:29:04 crc kubenswrapper[4953]: I1211 10:29:04.915311 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d052962-681b-4149-a93f-25b2146477d6-catalog-content\") pod \"community-operators-ff4hk\" (UID: \"7d052962-681b-4149-a93f-25b2146477d6\") " pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:04 crc kubenswrapper[4953]: I1211 10:29:04.915422 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d052962-681b-4149-a93f-25b2146477d6-utilities\") pod \"community-operators-ff4hk\" (UID: \"7d052962-681b-4149-a93f-25b2146477d6\") " pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:04 crc kubenswrapper[4953]: I1211 10:29:04.915449 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkhdf\" (UniqueName: \"kubernetes.io/projected/7d052962-681b-4149-a93f-25b2146477d6-kube-api-access-bkhdf\") pod \"community-operators-ff4hk\" (UID: \"7d052962-681b-4149-a93f-25b2146477d6\") " pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.016978 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d052962-681b-4149-a93f-25b2146477d6-catalog-content\") pod \"community-operators-ff4hk\" (UID: \"7d052962-681b-4149-a93f-25b2146477d6\") " pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.017038 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d052962-681b-4149-a93f-25b2146477d6-utilities\") pod \"community-operators-ff4hk\" (UID: \"7d052962-681b-4149-a93f-25b2146477d6\") " pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.017088 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkhdf\" (UniqueName: \"kubernetes.io/projected/7d052962-681b-4149-a93f-25b2146477d6-kube-api-access-bkhdf\") pod \"community-operators-ff4hk\" (UID: \"7d052962-681b-4149-a93f-25b2146477d6\") " pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.017662 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d052962-681b-4149-a93f-25b2146477d6-catalog-content\") pod \"community-operators-ff4hk\" (UID: 
\"7d052962-681b-4149-a93f-25b2146477d6\") " pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.017766 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d052962-681b-4149-a93f-25b2146477d6-utilities\") pod \"community-operators-ff4hk\" (UID: \"7d052962-681b-4149-a93f-25b2146477d6\") " pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.036643 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkhdf\" (UniqueName: \"kubernetes.io/projected/7d052962-681b-4149-a93f-25b2146477d6-kube-api-access-bkhdf\") pod \"community-operators-ff4hk\" (UID: \"7d052962-681b-4149-a93f-25b2146477d6\") " pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.070067 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.375979 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ff4hk"] Dec 11 10:29:05 crc kubenswrapper[4953]: W1211 10:29:05.384943 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d052962_681b_4149_a93f_25b2146477d6.slice/crio-2bf387cef098885d42ac7780c4f75024ae4dec71c2a06d241d34a8c4f5065420 WatchSource:0}: Error finding container 2bf387cef098885d42ac7780c4f75024ae4dec71c2a06d241d34a8c4f5065420: Status 404 returned error can't find the container with id 2bf387cef098885d42ac7780c4f75024ae4dec71c2a06d241d34a8c4f5065420 Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.436641 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4hk" event={"ID":"7d052962-681b-4149-a93f-25b2146477d6","Type":"ContainerStarted","Data":"2bf387cef098885d42ac7780c4f75024ae4dec71c2a06d241d34a8c4f5065420"} Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.780741 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z"] Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.782359 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.791747 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tlg5f" Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.797517 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z"] Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.936080 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6612d320-600d-4d86-a518-8594611f0a3c-bundle\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z\" (UID: \"6612d320-600d-4d86-a518-8594611f0a3c\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.936468 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsgsm\" (UniqueName: \"kubernetes.io/projected/6612d320-600d-4d86-a518-8594611f0a3c-kube-api-access-wsgsm\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z\" (UID: \"6612d320-600d-4d86-a518-8594611f0a3c\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:05 crc kubenswrapper[4953]: I1211 10:29:05.936535 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6612d320-600d-4d86-a518-8594611f0a3c-util\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z\" (UID: \"6612d320-600d-4d86-a518-8594611f0a3c\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:06 crc kubenswrapper[4953]: I1211 10:29:06.038691 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6612d320-600d-4d86-a518-8594611f0a3c-util\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z\" (UID: \"6612d320-600d-4d86-a518-8594611f0a3c\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:06 crc kubenswrapper[4953]: I1211 10:29:06.038902 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6612d320-600d-4d86-a518-8594611f0a3c-bundle\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z\" (UID: \"6612d320-600d-4d86-a518-8594611f0a3c\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:06 crc kubenswrapper[4953]: I1211 10:29:06.038979 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsgsm\" (UniqueName: \"kubernetes.io/projected/6612d320-600d-4d86-a518-8594611f0a3c-kube-api-access-wsgsm\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z\" (UID: \"6612d320-600d-4d86-a518-8594611f0a3c\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:06 crc kubenswrapper[4953]: I1211 10:29:06.040533 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6612d320-600d-4d86-a518-8594611f0a3c-util\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z\" (UID: \"6612d320-600d-4d86-a518-8594611f0a3c\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:06 crc kubenswrapper[4953]: I1211 10:29:06.040629 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6612d320-600d-4d86-a518-8594611f0a3c-bundle\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z\" (UID: \"6612d320-600d-4d86-a518-8594611f0a3c\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:06 crc kubenswrapper[4953]: I1211 10:29:06.064219 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsgsm\" (UniqueName: \"kubernetes.io/projected/6612d320-600d-4d86-a518-8594611f0a3c-kube-api-access-wsgsm\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z\" (UID: \"6612d320-600d-4d86-a518-8594611f0a3c\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:06 crc kubenswrapper[4953]: I1211 10:29:06.099680 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:06 crc kubenswrapper[4953]: I1211 10:29:06.443895 4953 generic.go:334] "Generic (PLEG): container finished" podID="7d052962-681b-4149-a93f-25b2146477d6" containerID="9c9fdf85b22db95f064988718d0c445d20b6ce91c26847a90ed26eef01c26b08" exitCode=0 Dec 11 10:29:06 crc kubenswrapper[4953]: I1211 10:29:06.443953 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4hk" event={"ID":"7d052962-681b-4149-a93f-25b2146477d6","Type":"ContainerDied","Data":"9c9fdf85b22db95f064988718d0c445d20b6ce91c26847a90ed26eef01c26b08"} Dec 11 10:29:06 crc kubenswrapper[4953]: I1211 10:29:06.713126 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z"] Dec 11 10:29:07 crc kubenswrapper[4953]: I1211 10:29:07.451942 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4hk" event={"ID":"7d052962-681b-4149-a93f-25b2146477d6","Type":"ContainerStarted","Data":"38f81c6a33b9ebe388f33b3dec3063c1c6e2ddd3932605d681ab2850ad1c2415"} Dec 11 10:29:07 crc kubenswrapper[4953]: I1211 10:29:07.453673 4953 generic.go:334] "Generic (PLEG): container finished" podID="6612d320-600d-4d86-a518-8594611f0a3c" containerID="4da1c2168e2e371cdfc12150afc17453808f0d70d413bc49fb8e9f3b73bd0845" exitCode=0 Dec 11 10:29:07 crc kubenswrapper[4953]: I1211 10:29:07.453736 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" event={"ID":"6612d320-600d-4d86-a518-8594611f0a3c","Type":"ContainerDied","Data":"4da1c2168e2e371cdfc12150afc17453808f0d70d413bc49fb8e9f3b73bd0845"} Dec 11 10:29:07 crc kubenswrapper[4953]: I1211 10:29:07.453781 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" event={"ID":"6612d320-600d-4d86-a518-8594611f0a3c","Type":"ContainerStarted","Data":"ebe1b41ede5c99409f582f0c33ee3a83fc759e93eb71d9802c4b1e1b3fbfa8e7"} Dec 11 10:29:08 crc 
kubenswrapper[4953]: I1211 10:29:08.460749 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" event={"ID":"6612d320-600d-4d86-a518-8594611f0a3c","Type":"ContainerStarted","Data":"8cb30d0323b8190510e67b81020c445734532e8cd1d91fd2f591b290ae794a32"} Dec 11 10:29:08 crc kubenswrapper[4953]: I1211 10:29:08.464542 4953 generic.go:334] "Generic (PLEG): container finished" podID="7d052962-681b-4149-a93f-25b2146477d6" containerID="38f81c6a33b9ebe388f33b3dec3063c1c6e2ddd3932605d681ab2850ad1c2415" exitCode=0 Dec 11 10:29:08 crc kubenswrapper[4953]: I1211 10:29:08.464601 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4hk" event={"ID":"7d052962-681b-4149-a93f-25b2146477d6","Type":"ContainerDied","Data":"38f81c6a33b9ebe388f33b3dec3063c1c6e2ddd3932605d681ab2850ad1c2415"} Dec 11 10:29:09 crc kubenswrapper[4953]: I1211 10:29:09.474405 4953 generic.go:334] "Generic (PLEG): container finished" podID="6612d320-600d-4d86-a518-8594611f0a3c" containerID="8cb30d0323b8190510e67b81020c445734532e8cd1d91fd2f591b290ae794a32" exitCode=0 Dec 11 10:29:09 crc kubenswrapper[4953]: I1211 10:29:09.474692 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" event={"ID":"6612d320-600d-4d86-a518-8594611f0a3c","Type":"ContainerDied","Data":"8cb30d0323b8190510e67b81020c445734532e8cd1d91fd2f591b290ae794a32"} Dec 11 10:29:09 crc kubenswrapper[4953]: I1211 10:29:09.516821 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4hk" event={"ID":"7d052962-681b-4149-a93f-25b2146477d6","Type":"ContainerStarted","Data":"ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca"} Dec 11 10:29:09 crc kubenswrapper[4953]: I1211 10:29:09.562020 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ff4hk" podStartSLOduration=2.851344147 podStartE2EDuration="5.561650297s" podCreationTimestamp="2025-12-11 10:29:04 +0000 UTC" firstStartedPulling="2025-12-11 10:29:06.445805198 +0000 UTC m=+1064.469664231" lastFinishedPulling="2025-12-11 10:29:09.156111348 +0000 UTC m=+1067.179970381" observedRunningTime="2025-12-11 10:29:09.556881344 +0000 UTC m=+1067.580740377" watchObservedRunningTime="2025-12-11 10:29:09.561650297 +0000 UTC m=+1067.585509350" Dec 11 10:29:10 crc kubenswrapper[4953]: I1211 10:29:10.527167 4953 generic.go:334] "Generic (PLEG): container finished" podID="6612d320-600d-4d86-a518-8594611f0a3c" containerID="dfb0254353035a54cc30ccf6030486ee6fbf79642e770e628a32bdd14b16ca44" exitCode=0 Dec 11 10:29:10 crc kubenswrapper[4953]: I1211 10:29:10.527322 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" event={"ID":"6612d320-600d-4d86-a518-8594611f0a3c","Type":"ContainerDied","Data":"dfb0254353035a54cc30ccf6030486ee6fbf79642e770e628a32bdd14b16ca44"} Dec 11 10:29:11 crc kubenswrapper[4953]: I1211 10:29:11.795296 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:11 crc kubenswrapper[4953]: I1211 10:29:11.974535 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6612d320-600d-4d86-a518-8594611f0a3c-util\") pod \"6612d320-600d-4d86-a518-8594611f0a3c\" (UID: \"6612d320-600d-4d86-a518-8594611f0a3c\") " Dec 11 10:29:11 crc kubenswrapper[4953]: I1211 10:29:11.974645 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsgsm\" (UniqueName: \"kubernetes.io/projected/6612d320-600d-4d86-a518-8594611f0a3c-kube-api-access-wsgsm\") pod \"6612d320-600d-4d86-a518-8594611f0a3c\" (UID: \"6612d320-600d-4d86-a518-8594611f0a3c\") " Dec 11 10:29:11 crc kubenswrapper[4953]: I1211 10:29:11.974691 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6612d320-600d-4d86-a518-8594611f0a3c-bundle\") pod \"6612d320-600d-4d86-a518-8594611f0a3c\" (UID: \"6612d320-600d-4d86-a518-8594611f0a3c\") " Dec 11 10:29:11 crc kubenswrapper[4953]: I1211 10:29:11.975891 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6612d320-600d-4d86-a518-8594611f0a3c-bundle" (OuterVolumeSpecName: "bundle") pod "6612d320-600d-4d86-a518-8594611f0a3c" (UID: "6612d320-600d-4d86-a518-8594611f0a3c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:29:11 crc kubenswrapper[4953]: I1211 10:29:11.980059 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6612d320-600d-4d86-a518-8594611f0a3c-kube-api-access-wsgsm" (OuterVolumeSpecName: "kube-api-access-wsgsm") pod "6612d320-600d-4d86-a518-8594611f0a3c" (UID: "6612d320-600d-4d86-a518-8594611f0a3c"). InnerVolumeSpecName "kube-api-access-wsgsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:29:11 crc kubenswrapper[4953]: I1211 10:29:11.996751 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6612d320-600d-4d86-a518-8594611f0a3c-util" (OuterVolumeSpecName: "util") pod "6612d320-600d-4d86-a518-8594611f0a3c" (UID: "6612d320-600d-4d86-a518-8594611f0a3c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:29:12 crc kubenswrapper[4953]: I1211 10:29:12.075632 4953 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6612d320-600d-4d86-a518-8594611f0a3c-util\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:12 crc kubenswrapper[4953]: I1211 10:29:12.075899 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsgsm\" (UniqueName: \"kubernetes.io/projected/6612d320-600d-4d86-a518-8594611f0a3c-kube-api-access-wsgsm\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:12 crc kubenswrapper[4953]: I1211 10:29:12.075991 4953 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6612d320-600d-4d86-a518-8594611f0a3c-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:12 crc kubenswrapper[4953]: I1211 10:29:12.544637 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" event={"ID":"6612d320-600d-4d86-a518-8594611f0a3c","Type":"ContainerDied","Data":"ebe1b41ede5c99409f582f0c33ee3a83fc759e93eb71d9802c4b1e1b3fbfa8e7"} Dec 11 10:29:12 crc kubenswrapper[4953]: I1211 10:29:12.544688 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebe1b41ede5c99409f582f0c33ee3a83fc759e93eb71d9802c4b1e1b3fbfa8e7" Dec 11 10:29:12 crc kubenswrapper[4953]: I1211 10:29:12.544736 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z" Dec 11 10:29:15 crc kubenswrapper[4953]: I1211 10:29:15.072098 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:15 crc kubenswrapper[4953]: I1211 10:29:15.072525 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:15 crc kubenswrapper[4953]: I1211 10:29:15.119502 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:15 crc kubenswrapper[4953]: I1211 10:29:15.618153 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.316856 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh"] Dec 11 10:29:17 crc kubenswrapper[4953]: E1211 10:29:17.317189 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6612d320-600d-4d86-a518-8594611f0a3c" containerName="extract" Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.317205 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6612d320-600d-4d86-a518-8594611f0a3c" containerName="extract" Dec 11 10:29:17 crc kubenswrapper[4953]: E1211 10:29:17.317239 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6612d320-600d-4d86-a518-8594611f0a3c" containerName="util" Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.317247 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6612d320-600d-4d86-a518-8594611f0a3c" containerName="util" Dec 11 10:29:17 crc kubenswrapper[4953]: E1211 10:29:17.317256 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6612d320-600d-4d86-a518-8594611f0a3c" 
containerName="pull" Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.317265 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6612d320-600d-4d86-a518-8594611f0a3c" containerName="pull" Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.317410 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6612d320-600d-4d86-a518-8594611f0a3c" containerName="extract" Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.318040 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh" Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.321059 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-kv6wc" Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.344479 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh"] Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.462742 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbng9\" (UniqueName: \"kubernetes.io/projected/02775e13-9835-4032-95b6-b554fd29bde1-kube-api-access-rbng9\") pod \"openstack-operator-controller-operator-966884dd6-rflkh\" (UID: \"02775e13-9835-4032-95b6-b554fd29bde1\") " pod="openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh" Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.528300 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ff4hk"] Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.564433 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbng9\" (UniqueName: \"kubernetes.io/projected/02775e13-9835-4032-95b6-b554fd29bde1-kube-api-access-rbng9\") pod \"openstack-operator-controller-operator-966884dd6-rflkh\" (UID: \"02775e13-9835-4032-95b6-b554fd29bde1\") " pod="openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh" Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.573509 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ff4hk" podUID="7d052962-681b-4149-a93f-25b2146477d6" containerName="registry-server" containerID="cri-o://ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca" gracePeriod=2 Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.587297 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbng9\" (UniqueName: \"kubernetes.io/projected/02775e13-9835-4032-95b6-b554fd29bde1-kube-api-access-rbng9\") pod \"openstack-operator-controller-operator-966884dd6-rflkh\" (UID: \"02775e13-9835-4032-95b6-b554fd29bde1\") " pod="openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh" Dec 11 10:29:17 crc kubenswrapper[4953]: I1211 10:29:17.638211 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.023680 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.172210 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d052962-681b-4149-a93f-25b2146477d6-catalog-content\") pod \"7d052962-681b-4149-a93f-25b2146477d6\" (UID: \"7d052962-681b-4149-a93f-25b2146477d6\") " Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.172301 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d052962-681b-4149-a93f-25b2146477d6-utilities\") pod \"7d052962-681b-4149-a93f-25b2146477d6\" (UID: \"7d052962-681b-4149-a93f-25b2146477d6\") " Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.172342 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkhdf\" (UniqueName: \"kubernetes.io/projected/7d052962-681b-4149-a93f-25b2146477d6-kube-api-access-bkhdf\") pod \"7d052962-681b-4149-a93f-25b2146477d6\" (UID: \"7d052962-681b-4149-a93f-25b2146477d6\") " Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.173148 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d052962-681b-4149-a93f-25b2146477d6-utilities" (OuterVolumeSpecName: "utilities") pod "7d052962-681b-4149-a93f-25b2146477d6" (UID: "7d052962-681b-4149-a93f-25b2146477d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.176797 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d052962-681b-4149-a93f-25b2146477d6-kube-api-access-bkhdf" (OuterVolumeSpecName: "kube-api-access-bkhdf") pod "7d052962-681b-4149-a93f-25b2146477d6" (UID: "7d052962-681b-4149-a93f-25b2146477d6"). InnerVolumeSpecName "kube-api-access-bkhdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.201935 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh"] Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.221281 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d052962-681b-4149-a93f-25b2146477d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d052962-681b-4149-a93f-25b2146477d6" (UID: "7d052962-681b-4149-a93f-25b2146477d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.275007 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkhdf\" (UniqueName: \"kubernetes.io/projected/7d052962-681b-4149-a93f-25b2146477d6-kube-api-access-bkhdf\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.275055 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d052962-681b-4149-a93f-25b2146477d6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.275065 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d052962-681b-4149-a93f-25b2146477d6-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.582294 4953 generic.go:334] "Generic (PLEG): container finished" podID="7d052962-681b-4149-a93f-25b2146477d6" containerID="ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca" exitCode=0 Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.582380 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ff4hk" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.582383 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4hk" event={"ID":"7d052962-681b-4149-a93f-25b2146477d6","Type":"ContainerDied","Data":"ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca"} Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.582503 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4hk" event={"ID":"7d052962-681b-4149-a93f-25b2146477d6","Type":"ContainerDied","Data":"2bf387cef098885d42ac7780c4f75024ae4dec71c2a06d241d34a8c4f5065420"} Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.582522 4953 scope.go:117] "RemoveContainer" containerID="ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.583680 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh" event={"ID":"02775e13-9835-4032-95b6-b554fd29bde1","Type":"ContainerStarted","Data":"7564a74c8e127adee62c31657ee3e69592795460d82764ba21d69b3e9aeb2009"} Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.618651 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ff4hk"] Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.624240 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ff4hk"] Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.665638 4953 scope.go:117] "RemoveContainer" containerID="38f81c6a33b9ebe388f33b3dec3063c1c6e2ddd3932605d681ab2850ad1c2415" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.690722 4953 scope.go:117] "RemoveContainer" containerID="9c9fdf85b22db95f064988718d0c445d20b6ce91c26847a90ed26eef01c26b08" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.725237 4953 scope.go:117] "RemoveContainer" containerID="ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca" Dec 11 10:29:18 crc kubenswrapper[4953]: E1211 10:29:18.725834 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca\": container with ID starting with ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca not found: ID does not exist" containerID="ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.725867 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca"} err="failed to get container status \"ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca\": rpc error: code = NotFound desc = could not find container \"ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca\": container with ID starting with ac3dfcf7d0a49d2fc5678a26e031d482a0628baad136ca8795a0812c480f0cca not found: ID does not exist" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.725892 4953 scope.go:117] "RemoveContainer" containerID="38f81c6a33b9ebe388f33b3dec3063c1c6e2ddd3932605d681ab2850ad1c2415" Dec 11 10:29:18 crc kubenswrapper[4953]: E1211 10:29:18.726152 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f81c6a33b9ebe388f33b3dec3063c1c6e2ddd3932605d681ab2850ad1c2415\": container with ID starting with 38f81c6a33b9ebe388f33b3dec3063c1c6e2ddd3932605d681ab2850ad1c2415 not found: ID does not exist" containerID="38f81c6a33b9ebe388f33b3dec3063c1c6e2ddd3932605d681ab2850ad1c2415" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.726172 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f81c6a33b9ebe388f33b3dec3063c1c6e2ddd3932605d681ab2850ad1c2415"} err="failed to get container status \"38f81c6a33b9ebe388f33b3dec3063c1c6e2ddd3932605d681ab2850ad1c2415\": rpc error: code = NotFound desc = could not find container \"38f81c6a33b9ebe388f33b3dec3063c1c6e2ddd3932605d681ab2850ad1c2415\": container with ID starting with 38f81c6a33b9ebe388f33b3dec3063c1c6e2ddd3932605d681ab2850ad1c2415 not found: ID does not exist" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.726185 4953 scope.go:117] "RemoveContainer" containerID="9c9fdf85b22db95f064988718d0c445d20b6ce91c26847a90ed26eef01c26b08" Dec 11 10:29:18 crc kubenswrapper[4953]: E1211 10:29:18.726439 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c9fdf85b22db95f064988718d0c445d20b6ce91c26847a90ed26eef01c26b08\": container with ID starting with 9c9fdf85b22db95f064988718d0c445d20b6ce91c26847a90ed26eef01c26b08 not found: ID does not exist" containerID="9c9fdf85b22db95f064988718d0c445d20b6ce91c26847a90ed26eef01c26b08" Dec 11 10:29:18 crc kubenswrapper[4953]: I1211 10:29:18.726459 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9fdf85b22db95f064988718d0c445d20b6ce91c26847a90ed26eef01c26b08"} err="failed to get container status \"9c9fdf85b22db95f064988718d0c445d20b6ce91c26847a90ed26eef01c26b08\": rpc error: code = NotFound desc = could not find container \"9c9fdf85b22db95f064988718d0c445d20b6ce91c26847a90ed26eef01c26b08\": container with ID starting with 9c9fdf85b22db95f064988718d0c445d20b6ce91c26847a90ed26eef01c26b08 not found: ID does not exist" Dec 11 10:29:20 crc kubenswrapper[4953]: I1211 10:29:20.492108 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d052962-681b-4149-a93f-25b2146477d6" 
path="/var/lib/kubelet/pods/7d052962-681b-4149-a93f-25b2146477d6/volumes" Dec 11 10:29:23 crc kubenswrapper[4953]: I1211 10:29:23.969741 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh" event={"ID":"02775e13-9835-4032-95b6-b554fd29bde1","Type":"ContainerStarted","Data":"906ee0fd274123587176d76058eb000442b2473e356c8d332a6311424fa5afaf"} Dec 11 10:29:23 crc kubenswrapper[4953]: I1211 10:29:23.971676 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh" Dec 11 10:29:24 crc kubenswrapper[4953]: I1211 10:29:24.009093 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh" podStartSLOduration=2.169445366 podStartE2EDuration="7.009073032s" podCreationTimestamp="2025-12-11 10:29:17 +0000 UTC" firstStartedPulling="2025-12-11 10:29:18.207030213 +0000 UTC m=+1076.230889266" lastFinishedPulling="2025-12-11 10:29:23.046657899 +0000 UTC m=+1081.070516932" observedRunningTime="2025-12-11 10:29:24.003829473 +0000 UTC m=+1082.027688516" watchObservedRunningTime="2025-12-11 10:29:24.009073032 +0000 UTC m=+1082.032932065" Dec 11 10:29:37 crc kubenswrapper[4953]: I1211 10:29:37.642724 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-rflkh" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.047802 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl"] Dec 11 10:29:58 crc kubenswrapper[4953]: E1211 10:29:58.048922 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d052962-681b-4149-a93f-25b2146477d6" containerName="registry-server" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.048957 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d052962-681b-4149-a93f-25b2146477d6" containerName="registry-server" Dec 11 10:29:58 crc kubenswrapper[4953]: E1211 10:29:58.048987 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d052962-681b-4149-a93f-25b2146477d6" containerName="extract-utilities" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.048999 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d052962-681b-4149-a93f-25b2146477d6" containerName="extract-utilities" Dec 11 10:29:58 crc kubenswrapper[4953]: E1211 10:29:58.049024 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d052962-681b-4149-a93f-25b2146477d6" containerName="extract-content" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.049035 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d052962-681b-4149-a93f-25b2146477d6" containerName="extract-content" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.049259 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d052962-681b-4149-a93f-25b2146477d6" containerName="registry-server" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.050260 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.052358 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lkfvr" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.084822 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.086099 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.094094 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rx8vd" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.105746 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.113999 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.122622 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.124169 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.128264 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-l78lc" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.132470 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.149332 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhdpb\" (UniqueName: \"kubernetes.io/projected/6b26e336-7c68-4ba3-979b-211c05708639-kube-api-access-rhdpb\") pod \"barbican-operator-controller-manager-7d9dfd778-4qtwl\" (UID: \"6b26e336-7c68-4ba3-979b-211c05708639\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.149418 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc5v8\" (UniqueName: \"kubernetes.io/projected/95010a68-4a99-4e84-8785-cb970f7085e1-kube-api-access-sc5v8\") pod \"cinder-operator-controller-manager-6c677c69b-jz7t2\" (UID: \"95010a68-4a99-4e84-8785-cb970f7085e1\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.149485 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4gsf\" (UniqueName: \"kubernetes.io/projected/b97b8317-f4e7-440c-8d72-df1cf55afe09-kube-api-access-v4gsf\") pod \"designate-operator-controller-manager-697fb699cf-c8jqf\" (UID: \"b97b8317-f4e7-440c-8d72-df1cf55afe09\") " 
pod="openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.149586 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.150761 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.153264 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-wvpv9" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.169730 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.170805 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.177334 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-gsbtz" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.215987 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.235124 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.236448 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.240829 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jtzxc" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.241668 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.250149 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc5v8\" (UniqueName: \"kubernetes.io/projected/95010a68-4a99-4e84-8785-cb970f7085e1-kube-api-access-sc5v8\") pod \"cinder-operator-controller-manager-6c677c69b-jz7t2\" (UID: \"95010a68-4a99-4e84-8785-cb970f7085e1\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.250216 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7bss\" (UniqueName: \"kubernetes.io/projected/c77a72a9-141b-4be9-99e2-406e16b68c2b-kube-api-access-d7bss\") pod \"glance-operator-controller-manager-5697bb5779-dx46k\" (UID: \"c77a72a9-141b-4be9-99e2-406e16b68c2b\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.250289 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4gsf\" (UniqueName: \"kubernetes.io/projected/b97b8317-f4e7-440c-8d72-df1cf55afe09-kube-api-access-v4gsf\") pod \"designate-operator-controller-manager-697fb699cf-c8jqf\" (UID: \"b97b8317-f4e7-440c-8d72-df1cf55afe09\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.250346 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2znc8\" (UniqueName: \"kubernetes.io/projected/43c3d99b-4ce9-421a-9212-c99b50e671af-kube-api-access-2znc8\") pod \"heat-operator-controller-manager-5f64f6f8bb-zqt7s\" (UID: \"43c3d99b-4ce9-421a-9212-c99b50e671af\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.250372 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhdpb\" (UniqueName: \"kubernetes.io/projected/6b26e336-7c68-4ba3-979b-211c05708639-kube-api-access-rhdpb\") pod \"barbican-operator-controller-manager-7d9dfd778-4qtwl\" (UID: \"6b26e336-7c68-4ba3-979b-211c05708639\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.250428 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b228g\" (UniqueName: \"kubernetes.io/projected/b81d3c69-eb5d-406f-8f14-330eaf0edec3-kube-api-access-b228g\") pod \"horizon-operator-controller-manager-68c6d99b8f-jz2zb\" (UID: \"b81d3c69-eb5d-406f-8f14-330eaf0edec3\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.282736 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd"] Dec 11 10:29:58 crc 
kubenswrapper[4953]: I1211 10:29:58.283875 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.292116 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.294142 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4gsf\" (UniqueName: \"kubernetes.io/projected/b97b8317-f4e7-440c-8d72-df1cf55afe09-kube-api-access-v4gsf\") pod \"designate-operator-controller-manager-697fb699cf-c8jqf\" (UID: \"b97b8317-f4e7-440c-8d72-df1cf55afe09\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.294608 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wckt8" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.298298 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc5v8\" (UniqueName: \"kubernetes.io/projected/95010a68-4a99-4e84-8785-cb970f7085e1-kube-api-access-sc5v8\") pod \"cinder-operator-controller-manager-6c677c69b-jz7t2\" (UID: \"95010a68-4a99-4e84-8785-cb970f7085e1\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.301976 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.303136 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhdpb\" (UniqueName: \"kubernetes.io/projected/6b26e336-7c68-4ba3-979b-211c05708639-kube-api-access-rhdpb\") pod \"barbican-operator-controller-manager-7d9dfd778-4qtwl\" (UID: \"6b26e336-7c68-4ba3-979b-211c05708639\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.313870 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.323929 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.325316 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.329604 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-f75n2"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.330629 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.333405 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-5zl5w" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.346225 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.347278 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.351846 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2znc8\" (UniqueName: \"kubernetes.io/projected/43c3d99b-4ce9-421a-9212-c99b50e671af-kube-api-access-2znc8\") pod \"heat-operator-controller-manager-5f64f6f8bb-zqt7s\" (UID: \"43c3d99b-4ce9-421a-9212-c99b50e671af\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.352456 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-54j2z" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.353224 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b228g\" (UniqueName: \"kubernetes.io/projected/b81d3c69-eb5d-406f-8f14-330eaf0edec3-kube-api-access-b228g\") pod \"horizon-operator-controller-manager-68c6d99b8f-jz2zb\" (UID: \"b81d3c69-eb5d-406f-8f14-330eaf0edec3\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.353289 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7bss\" (UniqueName: \"kubernetes.io/projected/c77a72a9-141b-4be9-99e2-406e16b68c2b-kube-api-access-d7bss\") pod \"glance-operator-controller-manager-5697bb5779-dx46k\" (UID: \"c77a72a9-141b-4be9-99e2-406e16b68c2b\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.371565 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.372883 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.373813 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-kbq24" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.376978 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lzc7c" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.392610 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.393683 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.395017 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.395677 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-72n5f" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.397281 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7bss\" (UniqueName: \"kubernetes.io/projected/c77a72a9-141b-4be9-99e2-406e16b68c2b-kube-api-access-d7bss\") pod \"glance-operator-controller-manager-5697bb5779-dx46k\" (UID: \"c77a72a9-141b-4be9-99e2-406e16b68c2b\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.397898 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.398927 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.403447 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.404009 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2m9xn" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.404426 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b228g\" (UniqueName: \"kubernetes.io/projected/b81d3c69-eb5d-406f-8f14-330eaf0edec3-kube-api-access-b228g\") pod \"horizon-operator-controller-manager-68c6d99b8f-jz2zb\" (UID: \"b81d3c69-eb5d-406f-8f14-330eaf0edec3\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.405236 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.411247 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-rvjkn" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.423546 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.424061 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2znc8\" (UniqueName: \"kubernetes.io/projected/43c3d99b-4ce9-421a-9212-c99b50e671af-kube-api-access-2znc8\") pod \"heat-operator-controller-manager-5f64f6f8bb-zqt7s\" (UID: \"43c3d99b-4ce9-421a-9212-c99b50e671af\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.435303 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.447552 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.449212 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.458591 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wrxb\" (UniqueName: \"kubernetes.io/projected/0ab27af2-4f6b-4e0f-b399-bef9b137ce63-kube-api-access-7wrxb\") pod \"mariadb-operator-controller-manager-79c8c4686c-c2pkq\" (UID: \"0ab27af2-4f6b-4e0f-b399-bef9b137ce63\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.458653 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qcsd\" (UniqueName: \"kubernetes.io/projected/33ec47dc-5b73-4fd2-b0e1-eee01b12110f-kube-api-access-8qcsd\") pod \"keystone-operator-controller-manager-7765d96ddf-vz6q6\" (UID: \"33ec47dc-5b73-4fd2-b0e1-eee01b12110f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.458683 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-nczzd\" (UID: \"954a102c-a60d-405a-b579-450e6b8e5c8b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.458725 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-776vr\" (UniqueName: \"kubernetes.io/projected/e905c779-8570-480f-a7b9-7bba299bee6b-kube-api-access-776vr\") pod \"octavia-operator-controller-manager-998648c74-6nwkm\" (UID: \"e905c779-8570-480f-a7b9-7bba299bee6b\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.458756 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ngc5\" (UniqueName: \"kubernetes.io/projected/9feb23b4-0b52-42c6-98a8-6b1de2241028-kube-api-access-7ngc5\") pod \"nova-operator-controller-manager-697bc559fc-wvw7n\" (UID: \"9feb23b4-0b52-42c6-98a8-6b1de2241028\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n" Dec 11 10:29:58 crc 
kubenswrapper[4953]: I1211 10:29:58.458777 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcjfk\" (UniqueName: \"kubernetes.io/projected/954a102c-a60d-405a-b579-450e6b8e5c8b-kube-api-access-tcjfk\") pod \"infra-operator-controller-manager-78d48bff9d-nczzd\" (UID: \"954a102c-a60d-405a-b579-450e6b8e5c8b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.458806 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pblp4\" (UniqueName: \"kubernetes.io/projected/b7626052-8b4d-46d2-8f66-5774f43643a0-kube-api-access-pblp4\") pod \"ironic-operator-controller-manager-967d97867-f75n2\" (UID: \"b7626052-8b4d-46d2-8f66-5774f43643a0\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.458839 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45p4\" (UniqueName: \"kubernetes.io/projected/b7ddbee0-c6cd-4571-912d-09744da61237-kube-api-access-t45p4\") pod \"manila-operator-controller-manager-5b5fd79c9c-c95px\" (UID: \"b7ddbee0-c6cd-4571-912d-09744da61237\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.458865 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxb58\" (UniqueName: \"kubernetes.io/projected/46ad2123-023a-4bcb-9b05-2a6b223c2d02-kube-api-access-qxb58\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-z6knw\" (UID: \"46ad2123-023a-4bcb-9b05-2a6b223c2d02\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.474782 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.483617 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.483677 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.547340 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.564841 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ngc5\" (UniqueName: \"kubernetes.io/projected/9feb23b4-0b52-42c6-98a8-6b1de2241028-kube-api-access-7ngc5\") pod \"nova-operator-controller-manager-697bc559fc-wvw7n\" (UID: \"9feb23b4-0b52-42c6-98a8-6b1de2241028\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.564892 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcjfk\" (UniqueName: \"kubernetes.io/projected/954a102c-a60d-405a-b579-450e6b8e5c8b-kube-api-access-tcjfk\") pod \"infra-operator-controller-manager-78d48bff9d-nczzd\" (UID: \"954a102c-a60d-405a-b579-450e6b8e5c8b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.565156 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pblp4\" (UniqueName: \"kubernetes.io/projected/b7626052-8b4d-46d2-8f66-5774f43643a0-kube-api-access-pblp4\") pod \"ironic-operator-controller-manager-967d97867-f75n2\" (UID: \"b7626052-8b4d-46d2-8f66-5774f43643a0\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.565199 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t45p4\" (UniqueName: \"kubernetes.io/projected/b7ddbee0-c6cd-4571-912d-09744da61237-kube-api-access-t45p4\") pod \"manila-operator-controller-manager-5b5fd79c9c-c95px\" (UID: \"b7ddbee0-c6cd-4571-912d-09744da61237\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.565251 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxb58\" (UniqueName: \"kubernetes.io/projected/46ad2123-023a-4bcb-9b05-2a6b223c2d02-kube-api-access-qxb58\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-z6knw\" (UID: \"46ad2123-023a-4bcb-9b05-2a6b223c2d02\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.565301 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wrxb\" (UniqueName: \"kubernetes.io/projected/0ab27af2-4f6b-4e0f-b399-bef9b137ce63-kube-api-access-7wrxb\") pod \"mariadb-operator-controller-manager-79c8c4686c-c2pkq\" (UID: \"0ab27af2-4f6b-4e0f-b399-bef9b137ce63\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.565361 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qcsd\" (UniqueName: \"kubernetes.io/projected/33ec47dc-5b73-4fd2-b0e1-eee01b12110f-kube-api-access-8qcsd\") pod \"keystone-operator-controller-manager-7765d96ddf-vz6q6\" (UID: \"33ec47dc-5b73-4fd2-b0e1-eee01b12110f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.565401 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert\") pod 
\"infra-operator-controller-manager-78d48bff9d-nczzd\" (UID: \"954a102c-a60d-405a-b579-450e6b8e5c8b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.565451 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-776vr\" (UniqueName: \"kubernetes.io/projected/e905c779-8570-480f-a7b9-7bba299bee6b-kube-api-access-776vr\") pod \"octavia-operator-controller-manager-998648c74-6nwkm\" (UID: \"e905c779-8570-480f-a7b9-7bba299bee6b\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm" Dec 11 10:29:58 crc kubenswrapper[4953]: E1211 10:29:58.590539 4953 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 10:29:58 crc kubenswrapper[4953]: E1211 10:29:58.594118 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert podName:954a102c-a60d-405a-b579-450e6b8e5c8b nodeName:}" failed. No retries permitted until 2025-12-11 10:29:59.094082835 +0000 UTC m=+1117.117941868 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert") pod "infra-operator-controller-manager-78d48bff9d-nczzd" (UID: "954a102c-a60d-405a-b579-450e6b8e5c8b") : secret "infra-operator-webhook-server-cert" not found Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.596421 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.598448 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-f75n2"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.617478 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.617592 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t45p4\" (UniqueName: \"kubernetes.io/projected/b7ddbee0-c6cd-4571-912d-09744da61237-kube-api-access-t45p4\") pod \"manila-operator-controller-manager-5b5fd79c9c-c95px\" (UID: \"b7ddbee0-c6cd-4571-912d-09744da61237\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.615852 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wrxb\" (UniqueName: \"kubernetes.io/projected/0ab27af2-4f6b-4e0f-b399-bef9b137ce63-kube-api-access-7wrxb\") pod \"mariadb-operator-controller-manager-79c8c4686c-c2pkq\" (UID: \"0ab27af2-4f6b-4e0f-b399-bef9b137ce63\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.610405 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxb58\" (UniqueName: \"kubernetes.io/projected/46ad2123-023a-4bcb-9b05-2a6b223c2d02-kube-api-access-qxb58\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-z6knw\" (UID: \"46ad2123-023a-4bcb-9b05-2a6b223c2d02\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.618830 
4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.618922 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.615135 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ngc5\" (UniqueName: \"kubernetes.io/projected/9feb23b4-0b52-42c6-98a8-6b1de2241028-kube-api-access-7ngc5\") pod \"nova-operator-controller-manager-697bc559fc-wvw7n\" (UID: \"9feb23b4-0b52-42c6-98a8-6b1de2241028\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.623395 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.625415 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.634899 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hbcx2" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.636933 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pblp4\" (UniqueName: \"kubernetes.io/projected/b7626052-8b4d-46d2-8f66-5774f43643a0-kube-api-access-pblp4\") pod \"ironic-operator-controller-manager-967d97867-f75n2\" (UID: \"b7626052-8b4d-46d2-8f66-5774f43643a0\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.645438 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qcsd\" (UniqueName: \"kubernetes.io/projected/33ec47dc-5b73-4fd2-b0e1-eee01b12110f-kube-api-access-8qcsd\") pod \"keystone-operator-controller-manager-7765d96ddf-vz6q6\" (UID: \"33ec47dc-5b73-4fd2-b0e1-eee01b12110f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.649103 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcjfk\" (UniqueName: \"kubernetes.io/projected/954a102c-a60d-405a-b579-450e6b8e5c8b-kube-api-access-tcjfk\") pod \"infra-operator-controller-manager-78d48bff9d-nczzd\" (UID: \"954a102c-a60d-405a-b579-450e6b8e5c8b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.655279 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-776vr\" (UniqueName: \"kubernetes.io/projected/e905c779-8570-480f-a7b9-7bba299bee6b-kube-api-access-776vr\") pod \"octavia-operator-controller-manager-998648c74-6nwkm\" (UID: \"e905c779-8570-480f-a7b9-7bba299bee6b\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.676704 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.687560 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.692796 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.693911 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.696209 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-m4c76" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.758748 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.772353 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-5ths7"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.773596 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.774391 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.775698 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-frz6z" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.785590 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmhpk\" (UniqueName: \"kubernetes.io/projected/e7802018-7972-4d69-8b66-ea4bb637ff7f-kube-api-access-wmhpk\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w\" (UID: \"e7802018-7972-4d69-8b66-ea4bb637ff7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.785648 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.786258 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqwtp\" (UniqueName: \"kubernetes.io/projected/a27b4200-b26e-434d-be23-2940fe7a57c7-kube-api-access-bqwtp\") pod \"ovn-operator-controller-manager-b6456fdb6-gkctw\" (UID: \"a27b4200-b26e-434d-be23-2940fe7a57c7\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.786374 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w\" (UID: \"e7802018-7972-4d69-8b66-ea4bb637ff7f\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.787209 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.794206 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2qfrg" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.797698 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.810252 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.810554 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.815045 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-5ths7"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.827656 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.843667 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.844896 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.868405 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-9q9br" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.873957 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.911011 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.912324 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.912944 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.920019 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqwtp\" (UniqueName: \"kubernetes.io/projected/a27b4200-b26e-434d-be23-2940fe7a57c7-kube-api-access-bqwtp\") pod \"ovn-operator-controller-manager-b6456fdb6-gkctw\" (UID: \"a27b4200-b26e-434d-be23-2940fe7a57c7\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.920100 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w\" (UID: \"e7802018-7972-4d69-8b66-ea4bb637ff7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.920216 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj9p4\" (UniqueName: \"kubernetes.io/projected/a5873dea-ac09-449b-95ae-fc5f77f0e8d4-kube-api-access-jj9p4\") pod \"placement-operator-controller-manager-78f8948974-5ths7\" (UID: \"a5873dea-ac09-449b-95ae-fc5f77f0e8d4\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.920247 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-795ls\" (UniqueName: \"kubernetes.io/projected/47b393e6-75c0-493f-83f5-d7e9d67ef5dd-kube-api-access-795ls\") pod \"telemetry-operator-controller-manager-58d5ff84df-5zt8b\" (UID: \"47b393e6-75c0-493f-83f5-d7e9d67ef5dd\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.920366 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wz97\" (UniqueName: \"kubernetes.io/projected/876fe2ae-127d-4e15-943a-3d3496252660-kube-api-access-4wz97\") pod \"swift-operator-controller-manager-9d58d64bc-qx85w\" (UID: \"876fe2ae-127d-4e15-943a-3d3496252660\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.920425 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kb9k\" (UniqueName: \"kubernetes.io/projected/905bc7ea-6d15-4d73-ad1c-71041c90e83f-kube-api-access-2kb9k\") pod \"test-operator-controller-manager-5854674fcc-7n7sr\" (UID: \"905bc7ea-6d15-4d73-ad1c-71041c90e83f\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.920471 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmhpk\" (UniqueName: \"kubernetes.io/projected/e7802018-7972-4d69-8b66-ea4bb637ff7f-kube-api-access-wmhpk\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w\" (UID: \"e7802018-7972-4d69-8b66-ea4bb637ff7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" Dec 11 10:29:58 crc kubenswrapper[4953]: E1211 10:29:58.921083 4953 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:29:58 crc kubenswrapper[4953]: E1211 10:29:58.921138 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert podName:e7802018-7972-4d69-8b66-ea4bb637ff7f nodeName:}" failed. No retries permitted until 2025-12-11 10:29:59.4211192 +0000 UTC m=+1117.444978233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert") pod "openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" (UID: "e7802018-7972-4d69-8b66-ea4bb637ff7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.921468 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b"] Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.936424 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fqbbp" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.954989 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqwtp\" (UniqueName: \"kubernetes.io/projected/a27b4200-b26e-434d-be23-2940fe7a57c7-kube-api-access-bqwtp\") pod \"ovn-operator-controller-manager-b6456fdb6-gkctw\" (UID: \"a27b4200-b26e-434d-be23-2940fe7a57c7\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.972375 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmhpk\" (UniqueName: \"kubernetes.io/projected/e7802018-7972-4d69-8b66-ea4bb637ff7f-kube-api-access-wmhpk\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w\" (UID: \"e7802018-7972-4d69-8b66-ea4bb637ff7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" Dec 11 10:29:58 crc kubenswrapper[4953]: I1211 10:29:58.982717 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr"] Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.016298 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9"] Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.017451 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.028074 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-vrj2p" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.029840 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kb9k\" (UniqueName: \"kubernetes.io/projected/905bc7ea-6d15-4d73-ad1c-71041c90e83f-kube-api-access-2kb9k\") pod \"test-operator-controller-manager-5854674fcc-7n7sr\" (UID: \"905bc7ea-6d15-4d73-ad1c-71041c90e83f\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.030126 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj9p4\" (UniqueName: \"kubernetes.io/projected/a5873dea-ac09-449b-95ae-fc5f77f0e8d4-kube-api-access-jj9p4\") pod \"placement-operator-controller-manager-78f8948974-5ths7\" (UID: \"a5873dea-ac09-449b-95ae-fc5f77f0e8d4\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.030152 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-795ls\" (UniqueName: \"kubernetes.io/projected/47b393e6-75c0-493f-83f5-d7e9d67ef5dd-kube-api-access-795ls\") pod \"telemetry-operator-controller-manager-58d5ff84df-5zt8b\" (UID: \"47b393e6-75c0-493f-83f5-d7e9d67ef5dd\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.030196 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wz97\" (UniqueName: \"kubernetes.io/projected/876fe2ae-127d-4e15-943a-3d3496252660-kube-api-access-4wz97\") pod \"swift-operator-controller-manager-9d58d64bc-qx85w\" (UID: \"876fe2ae-127d-4e15-943a-3d3496252660\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.076863 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj9p4\" (UniqueName: \"kubernetes.io/projected/a5873dea-ac09-449b-95ae-fc5f77f0e8d4-kube-api-access-jj9p4\") pod \"placement-operator-controller-manager-78f8948974-5ths7\" (UID: \"a5873dea-ac09-449b-95ae-fc5f77f0e8d4\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.080622 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kb9k\" (UniqueName: \"kubernetes.io/projected/905bc7ea-6d15-4d73-ad1c-71041c90e83f-kube-api-access-2kb9k\") pod \"test-operator-controller-manager-5854674fcc-7n7sr\" (UID: \"905bc7ea-6d15-4d73-ad1c-71041c90e83f\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.083798 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wz97\" (UniqueName: \"kubernetes.io/projected/876fe2ae-127d-4e15-943a-3d3496252660-kube-api-access-4wz97\") pod \"swift-operator-controller-manager-9d58d64bc-qx85w\" (UID: \"876fe2ae-127d-4e15-943a-3d3496252660\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 
10:29:59.093051 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-795ls\" (UniqueName: \"kubernetes.io/projected/47b393e6-75c0-493f-83f5-d7e9d67ef5dd-kube-api-access-795ls\") pod \"telemetry-operator-controller-manager-58d5ff84df-5zt8b\" (UID: \"47b393e6-75c0-493f-83f5-d7e9d67ef5dd\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.113854 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.114731 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9"] Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.131615 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-nczzd\" (UID: \"954a102c-a60d-405a-b579-450e6b8e5c8b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.131688 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r665\" (UniqueName: \"kubernetes.io/projected/d5261918-b44c-4d64-93d3-ab0742fdde80-kube-api-access-9r665\") pod \"watcher-operator-controller-manager-75944c9b7-gkqq9\" (UID: \"d5261918-b44c-4d64-93d3-ab0742fdde80\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9" Dec 11 10:29:59 crc kubenswrapper[4953]: E1211 10:29:59.133075 4953 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 10:29:59 crc kubenswrapper[4953]: E1211 10:29:59.133178 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert podName:954a102c-a60d-405a-b579-450e6b8e5c8b nodeName:}" failed. No retries permitted until 2025-12-11 10:30:00.133158506 +0000 UTC m=+1118.157017539 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert") pod "infra-operator-controller-manager-78d48bff9d-nczzd" (UID: "954a102c-a60d-405a-b579-450e6b8e5c8b") : secret "infra-operator-webhook-server-cert" not found Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.221515 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"] Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.225965 4953 util.go:30] "No sandbox for pod can be found. 
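The nestedpendingoperations.go:348 errors trace the volume manager's per-operation retry backoff. For the same infra-operator "cert" volume the delay is 500ms at 10:29:58.594118 and 1s at 10:29:59.133178 above, and reaches 2s at 10:30:00.159621 further down: the wait doubles after every consecutive failure. A self-contained Go sketch of that doubling follows; the 2-minute cap is an assumption for illustration, the real ceiling lives in the kubelet's exponential-backoff code.

// Sketch of the doubling retry delay visible in the nestedpendingoperations
// lines (500ms -> 1s -> 2s). The kubelet keeps this state per operation;
// the cap used here is an assumed value.
package main

import (
	"fmt"
	"time"
)

type expBackoff struct {
	initial, max, next time.Duration
}

// delay returns the wait before the next retry and doubles it for the one
// after, up to max.
func (b *expBackoff) delay() time.Duration {
	if b.next == 0 {
		b.next = b.initial
	}
	d := b.next
	if b.next < b.max {
		b.next *= 2
	}
	return d
}

func main() {
	b := expBackoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 0; i < 4; i++ {
		fmt.Println(b.delay()) // 500ms, 1s, 2s, 4s
	}
}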
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.229721 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"] Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.233912 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.234147 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.234502 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nvpv5" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.236792 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.236945 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.237052 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx4t5\" (UniqueName: \"kubernetes.io/projected/1c0f14ca-80dd-4704-989d-ca02d722bf43-kube-api-access-gx4t5\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.237154 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r665\" (UniqueName: \"kubernetes.io/projected/d5261918-b44c-4d64-93d3-ab0742fdde80-kube-api-access-9r665\") pod \"watcher-operator-controller-manager-75944c9b7-gkqq9\" (UID: \"d5261918-b44c-4d64-93d3-ab0742fdde80\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.237735 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.256376 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.257882 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4"] Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.267310 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.267316 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r665\" (UniqueName: \"kubernetes.io/projected/d5261918-b44c-4d64-93d3-ab0742fdde80-kube-api-access-9r665\") pod \"watcher-operator-controller-manager-75944c9b7-gkqq9\" (UID: \"d5261918-b44c-4d64-93d3-ab0742fdde80\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.271175 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-p8f7p" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.279994 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4"] Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.292767 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.338687 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx4t5\" (UniqueName: \"kubernetes.io/projected/1c0f14ca-80dd-4704-989d-ca02d722bf43-kube-api-access-gx4t5\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.338784 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.338868 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8kn\" (UniqueName: \"kubernetes.io/projected/e94f5882-5902-4e23-82b7-374766161807-kube-api-access-4m8kn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k9zt4\" (UID: \"e94f5882-5902-4e23-82b7-374766161807\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.338898 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:29:59 crc kubenswrapper[4953]: E1211 10:29:59.339110 4953 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 10:29:59 crc kubenswrapper[4953]: E1211 10:29:59.339177 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs podName:1c0f14ca-80dd-4704-989d-ca02d722bf43 nodeName:}" failed. 
No retries permitted until 2025-12-11 10:29:59.83915936 +0000 UTC m=+1117.863018393 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs") pod "openstack-operator-controller-manager-85cbc5886b-lxtqb" (UID: "1c0f14ca-80dd-4704-989d-ca02d722bf43") : secret "webhook-server-cert" not found Dec 11 10:29:59 crc kubenswrapper[4953]: E1211 10:29:59.339269 4953 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 10:29:59 crc kubenswrapper[4953]: E1211 10:29:59.341047 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs podName:1c0f14ca-80dd-4704-989d-ca02d722bf43 nodeName:}" failed. No retries permitted until 2025-12-11 10:29:59.841028909 +0000 UTC m=+1117.864887932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs") pod "openstack-operator-controller-manager-85cbc5886b-lxtqb" (UID: "1c0f14ca-80dd-4704-989d-ca02d722bf43") : secret "metrics-server-cert" not found Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.359698 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx4t5\" (UniqueName: \"kubernetes.io/projected/1c0f14ca-80dd-4704-989d-ca02d722bf43-kube-api-access-gx4t5\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.386332 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.439930 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w\" (UID: \"e7802018-7972-4d69-8b66-ea4bb637ff7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.439983 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8kn\" (UniqueName: \"kubernetes.io/projected/e94f5882-5902-4e23-82b7-374766161807-kube-api-access-4m8kn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k9zt4\" (UID: \"e94f5882-5902-4e23-82b7-374766161807\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4" Dec 11 10:29:59 crc kubenswrapper[4953]: E1211 10:29:59.440221 4953 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:29:59 crc kubenswrapper[4953]: E1211 10:29:59.440356 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert podName:e7802018-7972-4d69-8b66-ea4bb637ff7f nodeName:}" failed. No retries permitted until 2025-12-11 10:30:00.440325595 +0000 UTC m=+1118.464184628 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert") pod "openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" (UID: "e7802018-7972-4d69-8b66-ea4bb637ff7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.464135 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8kn\" (UniqueName: \"kubernetes.io/projected/e94f5882-5902-4e23-82b7-374766161807-kube-api-access-4m8kn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k9zt4\" (UID: \"e94f5882-5902-4e23-82b7-374766161807\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.482155 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.613488 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4" Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.663508 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl"] Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.855112 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:29:59 crc kubenswrapper[4953]: E1211 10:29:59.855256 4953 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 10:29:59 crc kubenswrapper[4953]: E1211 10:29:59.855463 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs podName:1c0f14ca-80dd-4704-989d-ca02d722bf43 nodeName:}" failed. No retries permitted until 2025-12-11 10:30:00.855441913 +0000 UTC m=+1118.879300946 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs") pod "openstack-operator-controller-manager-85cbc5886b-lxtqb" (UID: "1c0f14ca-80dd-4704-989d-ca02d722bf43") : secret "webhook-server-cert" not found Dec 11 10:29:59 crc kubenswrapper[4953]: I1211 10:29:59.856046 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:29:59 crc kubenswrapper[4953]: E1211 10:29:59.856164 4953 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 10:29:59 crc kubenswrapper[4953]: E1211 10:29:59.856233 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs podName:1c0f14ca-80dd-4704-989d-ca02d722bf43 nodeName:}" failed. No retries permitted until 2025-12-11 10:30:00.856220676 +0000 UTC m=+1118.880079709 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs") pod "openstack-operator-controller-manager-85cbc5886b-lxtqb" (UID: "1c0f14ca-80dd-4704-989d-ca02d722bf43") : secret "metrics-server-cert" not found Dec 11 10:30:00 crc kubenswrapper[4953]: W1211 10:30:00.041475 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc77a72a9_141b_4be9_99e2_406e16b68c2b.slice/crio-de839a1d373c2a5accf6e3f67a9f8914270a08300163733bb163734f51a49a34 WatchSource:0}: Error finding container de839a1d373c2a5accf6e3f67a9f8914270a08300163733bb163734f51a49a34: Status 404 returned error can't find the container with id de839a1d373c2a5accf6e3f67a9f8914270a08300163733bb163734f51a49a34 Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.043044 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6"] Dec 11 10:30:00 crc kubenswrapper[4953]: W1211 10:30:00.046029 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ec47dc_5b73_4fd2_b0e1_eee01b12110f.slice/crio-e3d8dde78d03613882e48de7ef4b0a813de4ad0d1f7eea290735414bf9a051af WatchSource:0}: Error finding container e3d8dde78d03613882e48de7ef4b0a813de4ad0d1f7eea290735414bf9a051af: Status 404 returned error can't find the container with id e3d8dde78d03613882e48de7ef4b0a813de4ad0d1f7eea290735414bf9a051af Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.053004 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k"] Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.159219 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-nczzd\" (UID: \"954a102c-a60d-405a-b579-450e6b8e5c8b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.159524 
Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.159524 4953 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.159621 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert podName:954a102c-a60d-405a-b579-450e6b8e5c8b nodeName:}" failed. No retries permitted until 2025-12-11 10:30:02.159602227 +0000 UTC m=+1120.183461260 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert") pod "infra-operator-controller-manager-78d48bff9d-nczzd" (UID: "954a102c-a60d-405a-b579-450e6b8e5c8b") : secret "infra-operator-webhook-server-cert" not found
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.167688 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"]
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.168841 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.172248 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.172415 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.183954 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"]
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.261135 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-secret-volume\") pod \"collect-profiles-29424150-978gp\" (UID: \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.261182 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7zr\" (UniqueName: \"kubernetes.io/projected/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-kube-api-access-2t7zr\") pod \"collect-profiles-29424150-978gp\" (UID: \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.261236 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-config-volume\") pod \"collect-profiles-29424150-978gp\" (UID: \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.327742 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s"]
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.347677 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr"]
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.371747 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-config-volume\") pod \"collect-profiles-29424150-978gp\" (UID: \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.371904 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-secret-volume\") pod \"collect-profiles-29424150-978gp\" (UID: \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.371926 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7zr\" (UniqueName: \"kubernetes.io/projected/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-kube-api-access-2t7zr\") pod \"collect-profiles-29424150-978gp\" (UID: \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.373093 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-config-volume\") pod \"collect-profiles-29424150-978gp\" (UID: \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.378023 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm"]
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.385221 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2"]
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.388614 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-secret-volume\") pod \"collect-profiles-29424150-978gp\" (UID: \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.394983 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw"]
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.403811 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7zr\" (UniqueName: \"kubernetes.io/projected/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-kube-api-access-2t7zr\") pod \"collect-profiles-29424150-978gp\" (UID: \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.412641 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px"]
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.430639 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf"]
Dec 11 10:30:00 crc kubenswrapper[4953]: W1211 10:30:00.438051 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97b8317_f4e7_440c_8d72_df1cf55afe09.slice/crio-2984e01ea73394c32a7b00a61bec78894f1737a92365043d5a0f44f5164ebeee WatchSource:0}: Error finding container 2984e01ea73394c32a7b00a61bec78894f1737a92365043d5a0f44f5164ebeee: Status 404 returned error can't find the container with id 2984e01ea73394c32a7b00a61bec78894f1737a92365043d5a0f44f5164ebeee
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.439310 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-5ths7"]
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.446150 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n"]
Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.453901 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw"]
Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.465748 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-795ls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-5zt8b_openstack-operators(47b393e6-75c0-493f-83f5-d7e9d67ef5dd): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.465862 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jj9p4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-5ths7_openstack-operators(a5873dea-ac09-449b-95ae-fc5f77f0e8d4): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.470361 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-795ls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-5zt8b_openstack-operators(47b393e6-75c0-493f-83f5-d7e9d67ef5dd): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.470726 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jj9p4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-5ths7_openstack-operators(a5873dea-ac09-449b-95ae-fc5f77f0e8d4): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.471903 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b" podUID="47b393e6-75c0-493f-83f5-d7e9d67ef5dd"
Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.471913 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7" podUID="a5873dea-ac09-449b-95ae-fc5f77f0e8d4"
Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.474336 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9r665,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-gkqq9_openstack-operators(d5261918-b44c-4d64-93d3-ab0742fdde80): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.474919 4953 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.474999 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert podName:e7802018-7972-4d69-8b66-ea4bb637ff7f nodeName:}" failed. No retries permitted until 2025-12-11 10:30:02.474978885 +0000 UTC m=+1120.498837918 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert") pod "openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" (UID: "e7802018-7972-4d69-8b66-ea4bb637ff7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.475393 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w\" (UID: \"e7802018-7972-4d69-8b66-ea4bb637ff7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.476699 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9r665,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-gkqq9_openstack-operators(d5261918-b44c-4d64-93d3-ab0742fdde80): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.477847 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9" podUID="d5261918-b44c-4d64-93d3-ab0742fdde80" Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.489586 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pblp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-f75n2_openstack-operators(b7626052-8b4d-46d2-8f66-5774f43643a0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.490871 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp" Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.493463 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pblp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-f75n2_openstack-operators(b7626052-8b4d-46d2-8f66-5774f43643a0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.494745 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2" podUID="b7626052-8b4d-46d2-8f66-5774f43643a0" Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.496354 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b"] Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.496399 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w"] Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.496413 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb"] Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.504138 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4wz97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-qx85w_openstack-operators(876fe2ae-127d-4e15-943a-3d3496252660): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.506263 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq"] Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.506545 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4wz97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-qx85w_openstack-operators(876fe2ae-127d-4e15-943a-3d3496252660): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:30:00 crc kubenswrapper[4953]: W1211 10:30:00.506557 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda27b4200_b26e_434d_be23_2940fe7a57c7.slice/crio-ed17344396910e10de5e5c248b52dbf5eb4d0feb6470b7e1ca86812604a5d627 WatchSource:0}: Error finding container ed17344396910e10de5e5c248b52dbf5eb4d0feb6470b7e1ca86812604a5d627: Status 404 returned error can't find the container with id ed17344396910e10de5e5c248b52dbf5eb4d0feb6470b7e1ca86812604a5d627 Dec 11 10:30:00 crc kubenswrapper[4953]: W1211 10:30:00.507620 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode94f5882_5902_4e23_82b7_374766161807.slice/crio-f6e5120c1bb4bd2d4407ab35695ebead06f9cca80d494663da352ad41060be0c WatchSource:0}: Error finding container f6e5120c1bb4bd2d4407ab35695ebead06f9cca80d494663da352ad41060be0c: Status 404 returned error can't find the container with id f6e5120c1bb4bd2d4407ab35695ebead06f9cca80d494663da352ad41060be0c Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.507738 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w" podUID="876fe2ae-127d-4e15-943a-3d3496252660" Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.508725 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bqwtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-gkctw_openstack-operators(a27b4200-b26e-434d-be23-2940fe7a57c7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.515674 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9"] Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.517784 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4m8kn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-k9zt4_openstack-operators(e94f5882-5902-4e23-82b7-374766161807): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.517883 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bqwtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-gkctw_openstack-operators(a27b4200-b26e-434d-be23-2940fe7a57c7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.520822 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" podUID="a27b4200-b26e-434d-be23-2940fe7a57c7" Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.520911 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4" podUID="e94f5882-5902-4e23-82b7-374766161807" Dec 11 10:30:00 
crc kubenswrapper[4953]: I1211 10:30:00.523486 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4"] Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.530267 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-f75n2"] Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.628288 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2" event={"ID":"95010a68-4a99-4e84-8785-cb970f7085e1","Type":"ContainerStarted","Data":"b925e9df85a20a02d2ce532329e9e8e309e23ac9f666945a4e879d86be828ea7"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.630531 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7" event={"ID":"a5873dea-ac09-449b-95ae-fc5f77f0e8d4","Type":"ContainerStarted","Data":"3e1c75bab1d9fd8196922e64250ab42303e19a6e5f7a6ffd98a07b178cdcf55d"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.633184 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px" event={"ID":"b7ddbee0-c6cd-4571-912d-09744da61237","Type":"ContainerStarted","Data":"2db837942e21fedf217be34506bca8eb03bd08af6112c574782908f5e2faf683"} Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.634075 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7" podUID="a5873dea-ac09-449b-95ae-fc5f77f0e8d4" Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.634347 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf" event={"ID":"b97b8317-f4e7-440c-8d72-df1cf55afe09","Type":"ContainerStarted","Data":"2984e01ea73394c32a7b00a61bec78894f1737a92365043d5a0f44f5164ebeee"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.635042 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w" event={"ID":"876fe2ae-127d-4e15-943a-3d3496252660","Type":"ContainerStarted","Data":"c1432717e7876c62002deda75e1f4601362b5d8dee85099250339b1b8379e8fd"} Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.637553 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w" podUID="876fe2ae-127d-4e15-943a-3d3496252660" Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.638212 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm" event={"ID":"e905c779-8570-480f-a7b9-7bba299bee6b","Type":"ContainerStarted","Data":"6f2ce8b2e8288a0792e7e079051b3abea061d767eb83bcacfbddb4cd965f127a"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.639918 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4" event={"ID":"e94f5882-5902-4e23-82b7-374766161807","Type":"ContainerStarted","Data":"f6e5120c1bb4bd2d4407ab35695ebead06f9cca80d494663da352ad41060be0c"} Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.640909 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4" podUID="e94f5882-5902-4e23-82b7-374766161807" Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.641679 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k" event={"ID":"c77a72a9-141b-4be9-99e2-406e16b68c2b","Type":"ContainerStarted","Data":"de839a1d373c2a5accf6e3f67a9f8914270a08300163733bb163734f51a49a34"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.644998 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb" event={"ID":"b81d3c69-eb5d-406f-8f14-330eaf0edec3","Type":"ContainerStarted","Data":"cdef2ebf5797e451e55a1853b42765c3e260e92a4b29c037c832fbcb6acec7f5"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.651254 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b" event={"ID":"47b393e6-75c0-493f-83f5-d7e9d67ef5dd","Type":"ContainerStarted","Data":"1f5ecf34e866ac2b3400e588129cb22dbbf4fde240e7f1b86ab63a3620d1345c"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.655615 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s" event={"ID":"43c3d99b-4ce9-421a-9212-c99b50e671af","Type":"ContainerStarted","Data":"2c50d41651cd1ad65c88349e4ffab6151e5d45b429475a757621e8369c447fa2"} Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.657641 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b" podUID="47b393e6-75c0-493f-83f5-d7e9d67ef5dd" Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.658292 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9" event={"ID":"d5261918-b44c-4d64-93d3-ab0742fdde80","Type":"ContainerStarted","Data":"703f2216f278f3ee417f1c61f4f995df0908f27b69ce055c3b183687957d3a82"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.659740 4953 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6" event={"ID":"33ec47dc-5b73-4fd2-b0e1-eee01b12110f","Type":"ContainerStarted","Data":"e3d8dde78d03613882e48de7ef4b0a813de4ad0d1f7eea290735414bf9a051af"} Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.660333 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9" podUID="d5261918-b44c-4d64-93d3-ab0742fdde80" Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.668034 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n" event={"ID":"9feb23b4-0b52-42c6-98a8-6b1de2241028","Type":"ContainerStarted","Data":"f098ae3d6ec1748c00eec64530f29a799261cce1a113902ccec21b526bf7b62f"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.669393 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2" event={"ID":"b7626052-8b4d-46d2-8f66-5774f43643a0","Type":"ContainerStarted","Data":"5893a27516b53c9a134f1a9bd803cb1c886d54471a922d14036209321f484980"} Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.674908 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2" podUID="b7626052-8b4d-46d2-8f66-5774f43643a0" Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.698103 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl" event={"ID":"6b26e336-7c68-4ba3-979b-211c05708639","Type":"ContainerStarted","Data":"41556039e958cf236841331e599ee2c26f8ac10b5fb774599043554939ab4709"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.708363 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw" event={"ID":"46ad2123-023a-4bcb-9b05-2a6b223c2d02","Type":"ContainerStarted","Data":"0c3aad52d2860649572ef82622e82cde996007d9c2aa97de67d48869795b6bfd"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.712924 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq" event={"ID":"0ab27af2-4f6b-4e0f-b399-bef9b137ce63","Type":"ContainerStarted","Data":"9a13deeb69f5cc8da5a7dac3388e11dd5e326009e1b9569b2340268f546a1ba4"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.731855 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" 
event={"ID":"a27b4200-b26e-434d-be23-2940fe7a57c7","Type":"ContainerStarted","Data":"ed17344396910e10de5e5c248b52dbf5eb4d0feb6470b7e1ca86812604a5d627"} Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.737311 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" podUID="a27b4200-b26e-434d-be23-2940fe7a57c7" Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.737915 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr" event={"ID":"905bc7ea-6d15-4d73-ad1c-71041c90e83f","Type":"ContainerStarted","Data":"ae32df7f75cb723f2f9f39150dafaec7a30daee17cd826fe0fa4012496bbd3de"} Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.767716 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"] Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.882109 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:30:00 crc kubenswrapper[4953]: I1211 10:30:00.882260 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.882315 4953 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.882405 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs podName:1c0f14ca-80dd-4704-989d-ca02d722bf43 nodeName:}" failed. No retries permitted until 2025-12-11 10:30:02.88238194 +0000 UTC m=+1120.906240973 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs") pod "openstack-operator-controller-manager-85cbc5886b-lxtqb" (UID: "1c0f14ca-80dd-4704-989d-ca02d722bf43") : secret "metrics-server-cert" not found Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.882405 4953 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 10:30:00 crc kubenswrapper[4953]: E1211 10:30:00.882468 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs podName:1c0f14ca-80dd-4704-989d-ca02d722bf43 nodeName:}" failed. 
No retries permitted until 2025-12-11 10:30:02.882459332 +0000 UTC m=+1120.906318465 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs") pod "openstack-operator-controller-manager-85cbc5886b-lxtqb" (UID: "1c0f14ca-80dd-4704-989d-ca02d722bf43") : secret "webhook-server-cert" not found Dec 11 10:30:01 crc kubenswrapper[4953]: I1211 10:30:01.770248 4953 generic.go:334] "Generic (PLEG): container finished" podID="b8937b8d-554d-44bf-9a69-b0e6350fd8f0" containerID="84cec215e2bf94f3194c2af17671802a197cf5821efaa3f0276adce66ccc2d68" exitCode=0 Dec 11 10:30:01 crc kubenswrapper[4953]: I1211 10:30:01.770313 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp" event={"ID":"b8937b8d-554d-44bf-9a69-b0e6350fd8f0","Type":"ContainerDied","Data":"84cec215e2bf94f3194c2af17671802a197cf5821efaa3f0276adce66ccc2d68"} Dec 11 10:30:01 crc kubenswrapper[4953]: I1211 10:30:01.771550 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp" event={"ID":"b8937b8d-554d-44bf-9a69-b0e6350fd8f0","Type":"ContainerStarted","Data":"b7f3422925b9eb303e0f6fa991484a4b845fae2ea1aac6d694bf5148efbfb02c"} Dec 11 10:30:01 crc kubenswrapper[4953]: E1211 10:30:01.773453 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4" podUID="e94f5882-5902-4e23-82b7-374766161807" Dec 11 10:30:01 crc kubenswrapper[4953]: E1211 10:30:01.774510 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w" podUID="876fe2ae-127d-4e15-943a-3d3496252660" Dec 11 10:30:01 crc kubenswrapper[4953]: E1211 10:30:01.778238 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b" podUID="47b393e6-75c0-493f-83f5-d7e9d67ef5dd" Dec 11 10:30:01 crc kubenswrapper[4953]: E1211 10:30:01.778424 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7" podUID="a5873dea-ac09-449b-95ae-fc5f77f0e8d4" Dec 11 10:30:01 crc kubenswrapper[4953]: E1211 10:30:01.779431 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" podUID="a27b4200-b26e-434d-be23-2940fe7a57c7" Dec 11 10:30:01 crc kubenswrapper[4953]: E1211 10:30:01.779803 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9" podUID="d5261918-b44c-4d64-93d3-ab0742fdde80" Dec 11 10:30:01 crc kubenswrapper[4953]: E1211 10:30:01.781364 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2" podUID="b7626052-8b4d-46d2-8f66-5774f43643a0" Dec 11 10:30:02 crc kubenswrapper[4953]: I1211 10:30:02.208734 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-nczzd\" (UID: \"954a102c-a60d-405a-b579-450e6b8e5c8b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" Dec 11 10:30:02 crc kubenswrapper[4953]: E1211 10:30:02.208966 4953 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 10:30:02 crc kubenswrapper[4953]: E1211 10:30:02.209058 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert podName:954a102c-a60d-405a-b579-450e6b8e5c8b nodeName:}" failed. No retries permitted until 2025-12-11 10:30:06.209031942 +0000 UTC m=+1124.232890975 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert") pod "infra-operator-controller-manager-78d48bff9d-nczzd" (UID: "954a102c-a60d-405a-b579-450e6b8e5c8b") : secret "infra-operator-webhook-server-cert" not found Dec 11 10:30:02 crc kubenswrapper[4953]: I1211 10:30:02.527960 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w\" (UID: \"e7802018-7972-4d69-8b66-ea4bb637ff7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" Dec 11 10:30:02 crc kubenswrapper[4953]: E1211 10:30:02.528667 4953 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:30:02 crc kubenswrapper[4953]: E1211 10:30:02.529287 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert podName:e7802018-7972-4d69-8b66-ea4bb637ff7f nodeName:}" failed. No retries permitted until 2025-12-11 10:30:06.529261143 +0000 UTC m=+1124.553120176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert") pod "openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" (UID: "e7802018-7972-4d69-8b66-ea4bb637ff7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:30:02 crc kubenswrapper[4953]: I1211 10:30:02.937104 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:30:02 crc kubenswrapper[4953]: E1211 10:30:02.937450 4953 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 10:30:02 crc kubenswrapper[4953]: E1211 10:30:02.937641 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs podName:1c0f14ca-80dd-4704-989d-ca02d722bf43 nodeName:}" failed. No retries permitted until 2025-12-11 10:30:06.937626168 +0000 UTC m=+1124.961485201 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs") pod "openstack-operator-controller-manager-85cbc5886b-lxtqb" (UID: "1c0f14ca-80dd-4704-989d-ca02d722bf43") : secret "webhook-server-cert" not found
Dec 11 10:30:02 crc kubenswrapper[4953]: I1211 10:30:02.938014 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"
Dec 11 10:30:02 crc kubenswrapper[4953]: E1211 10:30:02.938140 4953 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 11 10:30:02 crc kubenswrapper[4953]: E1211 10:30:02.938177 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs podName:1c0f14ca-80dd-4704-989d-ca02d722bf43 nodeName:}" failed. No retries permitted until 2025-12-11 10:30:06.938166595 +0000 UTC m=+1124.962025628 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs") pod "openstack-operator-controller-manager-85cbc5886b-lxtqb" (UID: "1c0f14ca-80dd-4704-989d-ca02d722bf43") : secret "metrics-server-cert" not found
Dec 11 10:30:06 crc kubenswrapper[4953]: I1211 10:30:06.296500 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-nczzd\" (UID: \"954a102c-a60d-405a-b579-450e6b8e5c8b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd"
Dec 11 10:30:06 crc kubenswrapper[4953]: E1211 10:30:06.296760 4953 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 11 10:30:06 crc kubenswrapper[4953]: E1211 10:30:06.296994 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert podName:954a102c-a60d-405a-b579-450e6b8e5c8b nodeName:}" failed. No retries permitted until 2025-12-11 10:30:14.296972068 +0000 UTC m=+1132.320831141 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert") pod "infra-operator-controller-manager-78d48bff9d-nczzd" (UID: "954a102c-a60d-405a-b579-450e6b8e5c8b") : secret "infra-operator-webhook-server-cert" not found
Dec 11 10:30:06 crc kubenswrapper[4953]: I1211 10:30:06.602515 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w\" (UID: \"e7802018-7972-4d69-8b66-ea4bb637ff7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w"
Dec 11 10:30:06 crc kubenswrapper[4953]: E1211 10:30:06.604765 4953 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 11 10:30:06 crc kubenswrapper[4953]: E1211 10:30:06.604889 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert podName:e7802018-7972-4d69-8b66-ea4bb637ff7f nodeName:}" failed. No retries permitted until 2025-12-11 10:30:14.60485524 +0000 UTC m=+1132.628714313 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert") pod "openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" (UID: "e7802018-7972-4d69-8b66-ea4bb637ff7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 11 10:30:07 crc kubenswrapper[4953]: I1211 10:30:07.011283 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"
Dec 11 10:30:07 crc kubenswrapper[4953]: E1211 10:30:07.011497 4953 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 11 10:30:07 crc kubenswrapper[4953]: I1211 10:30:07.011897 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"
Dec 11 10:30:07 crc kubenswrapper[4953]: E1211 10:30:07.011918 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs podName:1c0f14ca-80dd-4704-989d-ca02d722bf43 nodeName:}" failed. No retries permitted until 2025-12-11 10:30:15.011897873 +0000 UTC m=+1133.035756906 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs") pod "openstack-operator-controller-manager-85cbc5886b-lxtqb" (UID: "1c0f14ca-80dd-4704-989d-ca02d722bf43") : secret "webhook-server-cert" not found
Dec 11 10:30:07 crc kubenswrapper[4953]: E1211 10:30:07.012229 4953 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 11 10:30:07 crc kubenswrapper[4953]: E1211 10:30:07.012341 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs podName:1c0f14ca-80dd-4704-989d-ca02d722bf43 nodeName:}" failed. No retries permitted until 2025-12-11 10:30:15.012319466 +0000 UTC m=+1133.036178509 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs") pod "openstack-operator-controller-manager-85cbc5886b-lxtqb" (UID: "1c0f14ca-80dd-4704-989d-ca02d722bf43") : secret "metrics-server-cert" not found
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.768326 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.823405 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp" event={"ID":"b8937b8d-554d-44bf-9a69-b0e6350fd8f0","Type":"ContainerDied","Data":"b7f3422925b9eb303e0f6fa991484a4b845fae2ea1aac6d694bf5148efbfb02c"}
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.823448 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.823451 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7f3422925b9eb303e0f6fa991484a4b845fae2ea1aac6d694bf5148efbfb02c"
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.839900 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-config-volume\") pod \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\" (UID: \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\") "
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.840190 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-secret-volume\") pod \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\" (UID: \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\") "
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.840297 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t7zr\" (UniqueName: \"kubernetes.io/projected/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-kube-api-access-2t7zr\") pod \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\" (UID: \"b8937b8d-554d-44bf-9a69-b0e6350fd8f0\") "
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.840868 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-config-volume" (OuterVolumeSpecName: "config-volume") pod "b8937b8d-554d-44bf-9a69-b0e6350fd8f0" (UID: "b8937b8d-554d-44bf-9a69-b0e6350fd8f0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.846494 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b8937b8d-554d-44bf-9a69-b0e6350fd8f0" (UID: "b8937b8d-554d-44bf-9a69-b0e6350fd8f0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.854883 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-kube-api-access-2t7zr" (OuterVolumeSpecName: "kube-api-access-2t7zr") pod "b8937b8d-554d-44bf-9a69-b0e6350fd8f0" (UID: "b8937b8d-554d-44bf-9a69-b0e6350fd8f0"). InnerVolumeSpecName "kube-api-access-2t7zr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.942705 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t7zr\" (UniqueName: \"kubernetes.io/projected/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-kube-api-access-2t7zr\") on node \"crc\" DevicePath \"\""
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.942759 4953 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-config-volume\") on node \"crc\" DevicePath \"\""
Dec 11 10:30:08 crc kubenswrapper[4953]: I1211 10:30:08.942771 4953 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8937b8d-554d-44bf-9a69-b0e6350fd8f0-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 11 10:30:14 crc kubenswrapper[4953]: I1211 10:30:14.333812 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-nczzd\" (UID: \"954a102c-a60d-405a-b579-450e6b8e5c8b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd"
Dec 11 10:30:14 crc kubenswrapper[4953]: E1211 10:30:14.334047 4953 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 11 10:30:14 crc kubenswrapper[4953]: E1211 10:30:14.334442 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert podName:954a102c-a60d-405a-b579-450e6b8e5c8b nodeName:}" failed. No retries permitted until 2025-12-11 10:30:30.334422312 +0000 UTC m=+1148.358281345 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert") pod "infra-operator-controller-manager-78d48bff9d-nczzd" (UID: "954a102c-a60d-405a-b579-450e6b8e5c8b") : secret "infra-operator-webhook-server-cert" not found
Dec 11 10:30:14 crc kubenswrapper[4953]: I1211 10:30:14.638796 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w\" (UID: \"e7802018-7972-4d69-8b66-ea4bb637ff7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w"
Dec 11 10:30:14 crc kubenswrapper[4953]: E1211 10:30:14.639077 4953 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 11 10:30:14 crc kubenswrapper[4953]: E1211 10:30:14.639217 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert podName:e7802018-7972-4d69-8b66-ea4bb637ff7f nodeName:}" failed. No retries permitted until 2025-12-11 10:30:30.639188736 +0000 UTC m=+1148.663047809 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert") pod "openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" (UID: "e7802018-7972-4d69-8b66-ea4bb637ff7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 11 10:30:14 crc kubenswrapper[4953]: E1211 10:30:14.729616 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557"
Dec 11 10:30:14 crc kubenswrapper[4953]: E1211 10:30:14.730666 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxb58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-z6knw_openstack-operators(46ad2123-023a-4bcb-9b05-2a6b223c2d02): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 11 10:30:15 crc kubenswrapper[4953]: I1211 10:30:15.065144 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"
Dec 11 10:30:15 crc kubenswrapper[4953]: I1211 10:30:15.065302 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"
Dec 11 10:30:15 crc kubenswrapper[4953]: E1211 10:30:15.065529 4953 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 11 10:30:15 crc kubenswrapper[4953]: E1211 10:30:15.065646 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs podName:1c0f14ca-80dd-4704-989d-ca02d722bf43 nodeName:}" failed. No retries permitted until 2025-12-11 10:30:31.0656152 +0000 UTC m=+1149.089474233 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs") pod "openstack-operator-controller-manager-85cbc5886b-lxtqb" (UID: "1c0f14ca-80dd-4704-989d-ca02d722bf43") : secret "metrics-server-cert" not found
Dec 11 10:30:15 crc kubenswrapper[4953]: E1211 10:30:15.065964 4953 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 11 10:30:15 crc kubenswrapper[4953]: E1211 10:30:15.066015 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs podName:1c0f14ca-80dd-4704-989d-ca02d722bf43 nodeName:}" failed. No retries permitted until 2025-12-11 10:30:31.066004001 +0000 UTC m=+1149.089863034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs") pod "openstack-operator-controller-manager-85cbc5886b-lxtqb" (UID: "1c0f14ca-80dd-4704-989d-ca02d722bf43") : secret "webhook-server-cert" not found
Dec 11 10:30:15 crc kubenswrapper[4953]: E1211 10:30:15.589108 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad"
Dec 11 10:30:15 crc kubenswrapper[4953]: E1211 10:30:15.589532 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7wrxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-c2pkq_openstack-operators(0ab27af2-4f6b-4e0f-b399-bef9b137ce63): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 11 10:30:16 crc kubenswrapper[4953]: E1211 10:30:16.307658 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a"
Dec 11 10:30:16 crc kubenswrapper[4953]: E1211 10:30:16.308876 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4gsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-c8jqf_openstack-operators(b97b8317-f4e7-440c-8d72-df1cf55afe09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 11 10:30:17 crc kubenswrapper[4953]: E1211 10:30:17.613451 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670"
Dec 11 10:30:17 crc kubenswrapper[4953]: E1211 10:30:17.614732 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7ngc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-wvw7n_openstack-operators(9feb23b4-0b52-42c6-98a8-6b1de2241028): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 11 10:30:18 crc kubenswrapper[4953]: I1211 10:30:18.194373 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:30:18 crc kubenswrapper[4953]: I1211 10:30:18.194499 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:30:18 crc kubenswrapper[4953]: E1211 10:30:18.230788 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a"
Dec 11 10:30:18 crc kubenswrapper[4953]: E1211 10:30:18.231065 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t45p4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-c95px_openstack-operators(b7ddbee0-c6cd-4571-912d-09744da61237): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 11 10:30:18 crc kubenswrapper[4953]: E1211 10:30:18.956737 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7"
Dec 11 10:30:18 crc kubenswrapper[4953]: E1211 10:30:18.956944 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8qcsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-vz6q6_openstack-operators(33ec47dc-5b73-4fd2-b0e1-eee01b12110f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 11 10:30:25 crc kubenswrapper[4953]: I1211 10:30:25.851532 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 11 10:30:25 crc kubenswrapper[4953]: I1211 10:30:25.973010 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr" event={"ID":"905bc7ea-6d15-4d73-ad1c-71041c90e83f","Type":"ContainerStarted","Data":"7892bf63cb6990fbf5fcf41bfcbe06ad4de81f3e10c4b3dc2159859ebe04b725"}
Dec 11 10:30:25 crc kubenswrapper[4953]: I1211 10:30:25.976895 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s" event={"ID":"43c3d99b-4ce9-421a-9212-c99b50e671af","Type":"ContainerStarted","Data":"2f1801515dd17be58b7c8fcddacdda3239ad6dee96d4d47e517741dd572e3a57"}
Dec 11 10:30:25 crc kubenswrapper[4953]: I1211 10:30:25.988499 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" event={"ID":"a27b4200-b26e-434d-be23-2940fe7a57c7","Type":"ContainerStarted","Data":"1185220a0fabc652aeb0ed7723b8a462021a5fff60b15ffc89ff0184d889982c"}
Dec 11 10:30:27 crc kubenswrapper[4953]: I1211 10:30:27.016762 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7" event={"ID":"a5873dea-ac09-449b-95ae-fc5f77f0e8d4","Type":"ContainerStarted","Data":"614217ab8059ff7407c3d00731d309df5e2f8fce7a860d5dc62490389276a6ea"}
Dec 11 10:30:27 crc kubenswrapper[4953]: I1211 10:30:27.032130 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2" event={"ID":"b7626052-8b4d-46d2-8f66-5774f43643a0","Type":"ContainerStarted","Data":"95a36e7cb09512334a2e3a3b26520799dd0113625c9946daddce19bc3001ff40"}
Dec 11 10:30:27 crc kubenswrapper[4953]: I1211 10:30:27.040882 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9" event={"ID":"d5261918-b44c-4d64-93d3-ab0742fdde80","Type":"ContainerStarted","Data":"919a81b00b900a027d667b0e84b79f48bc4b5ee39e165149c0fef57ebcfc4332"}
Dec 11 10:30:27 crc kubenswrapper[4953]: I1211 10:30:27.056644 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl" event={"ID":"6b26e336-7c68-4ba3-979b-211c05708639","Type":"ContainerStarted","Data":"959bb81a5498fc2d48489ed5a5cb4b1ec29e26b02eb01a09926056392fd07e22"}
Dec 11 10:30:27 crc kubenswrapper[4953]: I1211 10:30:27.066319 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2" event={"ID":"95010a68-4a99-4e84-8785-cb970f7085e1","Type":"ContainerStarted","Data":"8557d08c853f452173c8280dd503b8162cf05e2a8650741533c189a30cfbc0f1"}
Dec 11 10:30:27 crc kubenswrapper[4953]: I1211 10:30:27.068939 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm" event={"ID":"e905c779-8570-480f-a7b9-7bba299bee6b","Type":"ContainerStarted","Data":"f57fceee0a0c174e69e2565f7e562891c89fe66b2d458291addf653e38c0c8e3"}
Dec 11 10:30:27 crc kubenswrapper[4953]: I1211 10:30:27.070190 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k" event={"ID":"c77a72a9-141b-4be9-99e2-406e16b68c2b","Type":"ContainerStarted","Data":"ee3f35c7cd510ff1c4e5e44060e274df72be9a753e6c0fdf4932fe9e6f179c9d"}
Dec 11 10:30:27 crc kubenswrapper[4953]: I1211 10:30:27.071937 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb" event={"ID":"b81d3c69-eb5d-406f-8f14-330eaf0edec3","Type":"ContainerStarted","Data":"fbd4b9250d21f080ce408e339174ef53e7b4f0be6f1fdc00138046417b1110d7"}
Dec 11 10:30:28 crc kubenswrapper[4953]: I1211 10:30:28.082459 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b" event={"ID":"47b393e6-75c0-493f-83f5-d7e9d67ef5dd","Type":"ContainerStarted","Data":"c43b439d43a3798100b1e328b06479b38e1335378404dddeff6b66a6471610c1"}
Dec 11 10:30:28 crc kubenswrapper[4953]: I1211 10:30:28.086163 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w" event={"ID":"876fe2ae-127d-4e15-943a-3d3496252660","Type":"ContainerStarted","Data":"81847c14644945b505f94440a0e06e98e3c7b3c4348117a2e757ba1c7c44f16b"}
Dec 11 10:30:29 crc kubenswrapper[4953]: I1211 10:30:29.098076 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4" event={"ID":"e94f5882-5902-4e23-82b7-374766161807","Type":"ContainerStarted","Data":"3fd04004422fd4fd3668a7ab8de8b9551e0709551c23d19388cd8d0bccc39028"}
Dec 11 10:30:29 crc kubenswrapper[4953]: I1211 10:30:29.133306 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k9zt4" podStartSLOduration=3.6765693600000002 podStartE2EDuration="30.133288111s" podCreationTimestamp="2025-12-11 10:29:59 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.517661809 +0000 UTC m=+1118.541520842" lastFinishedPulling="2025-12-11 10:30:26.97438055 +0000 UTC m=+1144.998239593" observedRunningTime="2025-12-11 10:30:29.120494768 +0000 UTC m=+1147.144353821" watchObservedRunningTime="2025-12-11 10:30:29.133288111 +0000 UTC m=+1147.157147144"
Dec 11 10:30:30 crc kubenswrapper[4953]: I1211 10:30:30.346162 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-nczzd\" (UID: \"954a102c-a60d-405a-b579-450e6b8e5c8b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd"
Dec 11 10:30:30 crc kubenswrapper[4953]: I1211 10:30:30.363437 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954a102c-a60d-405a-b579-450e6b8e5c8b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-nczzd\" (UID: \"954a102c-a60d-405a-b579-450e6b8e5c8b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd"
Dec 11 10:30:30 crc kubenswrapper[4953]: I1211 10:30:30.467457 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wckt8"
Dec 11 10:30:30 crc kubenswrapper[4953]: I1211 10:30:30.508176 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd"
Dec 11 10:30:30 crc kubenswrapper[4953]: I1211 10:30:30.655173 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w\" (UID: \"e7802018-7972-4d69-8b66-ea4bb637ff7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w"
Dec 11 10:30:30 crc kubenswrapper[4953]: I1211 10:30:30.659110 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7802018-7972-4d69-8b66-ea4bb637ff7f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w\" (UID: \"e7802018-7972-4d69-8b66-ea4bb637ff7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w"
Dec 11 10:30:30 crc kubenswrapper[4953]: E1211 10:30:30.665406 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6" podUID="33ec47dc-5b73-4fd2-b0e1-eee01b12110f"
Dec 11 10:30:30 crc kubenswrapper[4953]: E1211 10:30:30.679035 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq" podUID="0ab27af2-4f6b-4e0f-b399-bef9b137ce63"
Dec 11 10:30:30 crc kubenswrapper[4953]: E1211 10:30:30.689805 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf" podUID="b97b8317-f4e7-440c-8d72-df1cf55afe09"
Dec 11 10:30:30 crc kubenswrapper[4953]: E1211 10:30:30.843410 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw" podUID="46ad2123-023a-4bcb-9b05-2a6b223c2d02"
Dec 11 10:30:30 crc kubenswrapper[4953]: I1211 10:30:30.882996 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hbcx2"
Dec 11 10:30:30 crc kubenswrapper[4953]: I1211 10:30:30.898812 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.190235 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.190451 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.205761 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.211438 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c0f14ca-80dd-4704-989d-ca02d722bf43-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-lxtqb\" (UID: \"1c0f14ca-80dd-4704-989d-ca02d722bf43\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.258738 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq" event={"ID":"0ab27af2-4f6b-4e0f-b399-bef9b137ce63","Type":"ContainerStarted","Data":"4d5513b72fb9a7e6125d5069c6ef643857999686c2916cf926017ac8c46a6efb"}
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.275098 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb" event={"ID":"b81d3c69-eb5d-406f-8f14-330eaf0edec3","Type":"ContainerStarted","Data":"052da23d08627c50b253933332df62f3c5f671a05464343452631c2f5dec9195"}
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.276116 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.282868 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2" event={"ID":"b7626052-8b4d-46d2-8f66-5774f43643a0","Type":"ContainerStarted","Data":"8e8945261073ee782e47fdc637d3cb43ed6d943cf779e2f79d1285c2c727245a"}
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.285110 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.289215 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.292728 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.294141 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w" event={"ID":"876fe2ae-127d-4e15-943a-3d3496252660","Type":"ContainerStarted","Data":"e2d6c9cbd0656d98cdb063f70b634333f9b0305fe0b87a651373ec691de22d83"}
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.295004 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.400638 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nvpv5"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.409508 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.429875 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl" event={"ID":"6b26e336-7c68-4ba3-979b-211c05708639","Type":"ContainerStarted","Data":"2e2c0b7ca609107be280fb6953e9f23e79ebb6c48a552e3d1115b022ce39338c"}
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.431180 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.444926 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.454963 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw" event={"ID":"46ad2123-023a-4bcb-9b05-2a6b223c2d02","Type":"ContainerStarted","Data":"ea2d0dac51f5c1caee86bd3b733d4568f187b29d4b9fbc4cc2f3659d63a6b8f5"}
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.470132 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jz2zb" podStartSLOduration=4.178019416 podStartE2EDuration="33.470109093s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.463186704 +0000 UTC m=+1118.487045737" lastFinishedPulling="2025-12-11 10:30:29.755276391 +0000 UTC m=+1147.779135414" observedRunningTime="2025-12-11 10:30:31.44492291 +0000 UTC m=+1149.468781943" watchObservedRunningTime="2025-12-11 10:30:31.470109093 +0000 UTC m=+1149.493968126"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.472664 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd"]
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.478088 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm" event={"ID":"e905c779-8570-480f-a7b9-7bba299bee6b","Type":"ContainerStarted","Data":"1c67d9465969349ae5fad6754a53055bab1a1e565094a4bd9a1d7749ccac848c"}
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.479124 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.481601 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" event={"ID":"a27b4200-b26e-434d-be23-2940fe7a57c7","Type":"ContainerStarted","Data":"e1c5ecce2de481d7e958445287bfeea4a39dd6ec05ea208215e3782bc9478505"}
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.484219 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.485193 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.487724 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b" event={"ID":"47b393e6-75c0-493f-83f5-d7e9d67ef5dd","Type":"ContainerStarted","Data":"a2f3fc14913fd561db38e5feb0191875070795ba6d276580127e9f88681e42b5"}
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.488689 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.494800 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.499999 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf" event={"ID":"b97b8317-f4e7-440c-8d72-df1cf55afe09","Type":"ContainerStarted","Data":"ca6a50c1d6520ab9e68f18681d6e7982510a1d92f03ebe152e2c9a01fbc55a6d"}
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.500767 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-f75n2" podStartSLOduration=4.093125863 podStartE2EDuration="33.500475808s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.489395569 +0000 UTC m=+1118.513254602" lastFinishedPulling="2025-12-11 10:30:29.896745514 +0000 UTC m=+1147.920604547" observedRunningTime="2025-12-11 10:30:31.492706804 +0000 UTC m=+1149.516565837" watchObservedRunningTime="2025-12-11 10:30:31.500475808 +0000 UTC m=+1149.524334841"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.507335 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6" event={"ID":"33ec47dc-5b73-4fd2-b0e1-eee01b12110f","Type":"ContainerStarted","Data":"54b0ed5c68c9824e468b7d66a52c3c0bb2a8c53d14ebb33d9cfe7cc3c36c38d6"}
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.516892 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w" podStartSLOduration=3.816175135 podStartE2EDuration="33.516875104s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.503973468 +0000 UTC m=+1118.527832501" lastFinishedPulling="2025-12-11 10:30:30.204673437 +0000 UTC m=+1148.228532470" observedRunningTime="2025-12-11 10:30:31.510421002 +0000 UTC m=+1149.534280045" watchObservedRunningTime="2025-12-11 10:30:31.516875104 +0000 UTC m=+1149.540734137"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.598518 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6nwkm" podStartSLOduration=3.804689623 podStartE2EDuration="33.598482884s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.421846172 +0000 UTC m=+1118.445705205" lastFinishedPulling="2025-12-11 10:30:30.215639433 +0000 UTC m=+1148.239498466" observedRunningTime="2025-12-11 10:30:31.578139353 +0000 UTC m=+1149.601998386" watchObservedRunningTime="2025-12-11 10:30:31.598482884 +0000 UTC m=+1149.622341917"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.672316 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4qtwl" podStartSLOduration=3.460982114 podStartE2EDuration="33.672288607s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:29:59.684075298 +0000 UTC m=+1117.707934331" lastFinishedPulling="2025-12-11 10:30:29.895381791 +0000 UTC m=+1147.919240824" observedRunningTime="2025-12-11 10:30:31.667654041 +0000 UTC m=+1149.691513074" watchObservedRunningTime="2025-12-11 10:30:31.672288607 +0000 UTC m=+1149.696147650"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.736777 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b" podStartSLOduration=4.454318194 podStartE2EDuration="33.736757797s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.465618551 +0000 UTC m=+1118.489477574" lastFinishedPulling="2025-12-11 10:30:29.748058143 +0000 UTC m=+1147.771917177" observedRunningTime="2025-12-11 10:30:31.705792852 +0000 UTC m=+1149.729651895" watchObservedRunningTime="2025-12-11 10:30:31.736757797 +0000 UTC m=+1149.760616830"
Dec 11 10:30:31 crc kubenswrapper[4953]: W1211 10:30:31.766813 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod954a102c_a60d_405a_b579_450e6b8e5c8b.slice/crio-1a03c3a2da2d1409d46d0b0d13cfa9431cdb40ee4325c1a5bd66ced7a622df96 WatchSource:0}: Error finding container 1a03c3a2da2d1409d46d0b0d13cfa9431cdb40ee4325c1a5bd66ced7a622df96: Status 404 returned error can't find the container with id 1a03c3a2da2d1409d46d0b0d13cfa9431cdb40ee4325c1a5bd66ced7a622df96
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.814837 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" podStartSLOduration=4.608445995 podStartE2EDuration="33.814820614s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.508558692 +0000 UTC m=+1118.532417725" lastFinishedPulling="2025-12-11 10:30:29.714933311 +0000 UTC m=+1147.738792344" observedRunningTime="2025-12-11 10:30:31.808216036 +0000 UTC m=+1149.832075069" watchObservedRunningTime="2025-12-11 10:30:31.814820614 +0000 UTC m=+1149.838679637"
Dec 11 10:30:31 crc kubenswrapper[4953]: I1211 10:30:31.938909 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w"]
Dec 11 10:30:32 crc kubenswrapper[4953]: E1211 10:30:32.225975 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n" podUID="9feb23b4-0b52-42c6-98a8-6b1de2241028"
Dec 11 10:30:32 crc kubenswrapper[4953]: E1211 10:30:32.238179 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px" podUID="b7ddbee0-c6cd-4571-912d-09744da61237"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.415224 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb"]
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.552192 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2" event={"ID":"95010a68-4a99-4e84-8785-cb970f7085e1","Type":"ContainerStarted","Data":"711621aca5b3364c1bd409c20d8ee58361571d07f17710d67193b0f16cc22b8e"}
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.552793 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.555116 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k" event={"ID":"c77a72a9-141b-4be9-99e2-406e16b68c2b","Type":"ContainerStarted","Data":"589a0666992d0239235ef129a1a697ca9a153192d733498b4b2439e414d3f253"}
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.555856 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.561647 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n" event={"ID":"9feb23b4-0b52-42c6-98a8-6b1de2241028","Type":"ContainerStarted","Data":"17d9acf377c106ccc1e2bb3a79ef8bfe636f5577537b9c12863376d8260bdc88"}
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.562314 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.565371 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.575712 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px" event={"ID":"b7ddbee0-c6cd-4571-912d-09744da61237","Type":"ContainerStarted","Data":"79690124a5b676bb4a812a0af618bade1b747b7f3f17c4988c2d53d8fb4aa651"}
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.584694 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" event={"ID":"e7802018-7972-4d69-8b66-ea4bb637ff7f","Type":"ContainerStarted","Data":"c726f6c8cf275b537fc3a7a5c709205581b884a251923308f95693237a9cdd80"}
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.588243 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq" event={"ID":"0ab27af2-4f6b-4e0f-b399-bef9b137ce63","Type":"ContainerStarted","Data":"0c9efa6696197ffa637538658c43f360799ded47fd4d736d4c591863f6849fe9"}
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.589152 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.592900 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7" event={"ID":"a5873dea-ac09-449b-95ae-fc5f77f0e8d4","Type":"ContainerStarted","Data":"46cac8cfc9fcda2fbf1a8f6a127dc8839da3016a3bbdb188f1dff85fb4fa787f"}
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.594114 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.598248 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.599765 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr" event={"ID":"905bc7ea-6d15-4d73-ad1c-71041c90e83f","Type":"ContainerStarted","Data":"4526ff09d7b7cdf9a8170ecb0e2e2da0a8d0e8a65b801ad01454feeb73741313"}
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.600507 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.607419 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.608202 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" event={"ID":"1c0f14ca-80dd-4704-989d-ca02d722bf43","Type":"ContainerStarted","Data":"4af879c8467ec1871b98a0cc889aab6ef1dd215041e2b2a12053d9ec15263eff"}
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.619103 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" event={"ID":"954a102c-a60d-405a-b579-450e6b8e5c8b","Type":"ContainerStarted","Data":"1a03c3a2da2d1409d46d0b0d13cfa9431cdb40ee4325c1a5bd66ced7a622df96"}
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.629598 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9" event={"ID":"d5261918-b44c-4d64-93d3-ab0742fdde80","Type":"ContainerStarted","Data":"6f96d00ca14d7be63d341828ec9044f835aeb6e50dba6f78c6f7d33c30398856"}
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.631397 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.640902 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-5zt8b"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.641013 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-qx85w"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.641084 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.717380 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jz7t2" podStartSLOduration=4.209766574 podStartE2EDuration="34.717357575s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.388085659 +0000 UTC m=+1118.411944692" lastFinishedPulling="2025-12-11 10:30:30.89567666 +0000 UTC m=+1148.919535693" observedRunningTime="2025-12-11 10:30:32.640650931 +0000 UTC m=+1150.664509964" watchObservedRunningTime="2025-12-11 10:30:32.717357575 +0000 UTC m=+1150.741216608"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.718786 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-7n7sr" podStartSLOduration=4.457141913 podStartE2EDuration="34.71877381s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.392770217 +0000 UTC m=+1118.416629250" lastFinishedPulling="2025-12-11 10:30:30.654402114 +0000 UTC m=+1148.678261147" observedRunningTime="2025-12-11 10:30:32.695744125 +0000 UTC m=+1150.719603158" watchObservedRunningTime="2025-12-11 10:30:32.71877381 +0000 UTC m=+1150.742632843"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.887327 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-5ths7" podStartSLOduration=4.643979334 podStartE2EDuration="34.887303495s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.465803657 +0000 UTC m=+1118.489662690" lastFinishedPulling="2025-12-11 10:30:30.709127808 +0000 UTC m=+1148.732986851" observedRunningTime="2025-12-11 10:30:32.861198253 +0000 UTC m=+1150.885057306" watchObservedRunningTime="2025-12-11 10:30:32.887303495 +0000 UTC m=+1150.911162528"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.936294 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-dx46k" podStartSLOduration=4.115157036 podStartE2EDuration="34.936266686s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.045613519 +0000 UTC m=+1118.069472552" lastFinishedPulling="2025-12-11 10:30:30.866723169 +0000 UTC m=+1148.890582202" observedRunningTime="2025-12-11 10:30:32.901005237 +0000 UTC m=+1150.924864280" watchObservedRunningTime="2025-12-11 10:30:32.936266686 +0000 UTC m=+1150.960125719"
Dec 11 10:30:32 crc kubenswrapper[4953]: I1211 10:30:32.940268 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq" podStartSLOduration=3.2877224 podStartE2EDuration="34.940254012s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.462477542 +0000 UTC m=+1118.486336575" lastFinishedPulling="2025-12-11 10:30:32.115009164 +0000 UTC m=+1150.138868187" observedRunningTime="2025-12-11 10:30:32.938024782 +0000 UTC m=+1150.961883815" watchObservedRunningTime="2025-12-11 10:30:32.940254012 +0000 UTC m=+1150.964113045"
Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.216704 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-gkqq9" podStartSLOduration=4.828221295 podStartE2EDuration="35.216681534s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.474190481 +0000 UTC m=+1118.498049514" lastFinishedPulling="2025-12-11 10:30:30.86265072 +0000 UTC m=+1148.886509753" observedRunningTime="2025-12-11 10:30:33.124997148 +0000 UTC m=+1151.148856181" watchObservedRunningTime="2025-12-11 10:30:33.216681534 +0000 UTC m=+1151.240540557"
Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.727073 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" event={"ID":"1c0f14ca-80dd-4704-989d-ca02d722bf43","Type":"ContainerStarted","Data":"6b859e84cc0c15ce3ef20e237b40d8b7592fe71c4780bd2c80efe60e4cda0ab6"} Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.728237 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.742444 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6" event={"ID":"33ec47dc-5b73-4fd2-b0e1-eee01b12110f","Type":"ContainerStarted","Data":"6f966efec1e14e707c10b5b732f2944bff8ca87bf5394680d86af630c8ef1989"} Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.743294 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6" Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.751628 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw" event={"ID":"46ad2123-023a-4bcb-9b05-2a6b223c2d02","Type":"ContainerStarted","Data":"9dd8ee70e398d7b3d2e3922be5e4c652f74ff2a93d4abb23a24bc507f11003a2"} Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.752449 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw" Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.760110 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s" event={"ID":"43c3d99b-4ce9-421a-9212-c99b50e671af","Type":"ContainerStarted","Data":"b7cbf26bccdd1702cc451b91b65c3861568fd8338b4ff057ce53b2bc5ac60671"} Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.761268 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s" Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.766797 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s" Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.777101 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf" event={"ID":"b97b8317-f4e7-440c-8d72-df1cf55afe09","Type":"ContainerStarted","Data":"ab12aa8e7893c71568c1887107a9adc0bb3a9768b7292221dfa686c14cc82110"} Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.777168 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf" Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.780227 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" podStartSLOduration=35.780198803 podStartE2EDuration="35.780198803s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:30:33.776279179 +0000 UTC m=+1151.800138212" watchObservedRunningTime="2025-12-11 10:30:33.780198803 +0000 UTC m=+1151.804057836" Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 
10:30:33.877267 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw" podStartSLOduration=4.003856623 podStartE2EDuration="35.877248488s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.416412121 +0000 UTC m=+1118.440271154" lastFinishedPulling="2025-12-11 10:30:32.289803986 +0000 UTC m=+1150.313663019" observedRunningTime="2025-12-11 10:30:33.875361629 +0000 UTC m=+1151.899220672" watchObservedRunningTime="2025-12-11 10:30:33.877248488 +0000 UTC m=+1151.901107521" Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.918776 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6" podStartSLOduration=3.641612699 podStartE2EDuration="35.918753334s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.049842832 +0000 UTC m=+1118.073701865" lastFinishedPulling="2025-12-11 10:30:32.326983467 +0000 UTC m=+1150.350842500" observedRunningTime="2025-12-11 10:30:33.91608838 +0000 UTC m=+1151.939947423" watchObservedRunningTime="2025-12-11 10:30:33.918753334 +0000 UTC m=+1151.942612367" Dec 11 10:30:33 crc kubenswrapper[4953]: I1211 10:30:33.984090 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zqt7s" podStartSLOduration=4.201898988 podStartE2EDuration="35.984026229s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.328069771 +0000 UTC m=+1118.351928794" lastFinishedPulling="2025-12-11 10:30:32.110197002 +0000 UTC m=+1150.134056035" observedRunningTime="2025-12-11 10:30:33.979245089 +0000 UTC m=+1152.003104122" watchObservedRunningTime="2025-12-11 10:30:33.984026229 +0000 UTC m=+1152.007885292" Dec 11 10:30:34 crc kubenswrapper[4953]: I1211 10:30:34.009893 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf" podStartSLOduration=4.350064761 podStartE2EDuration="36.009877593s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.463456142 +0000 UTC m=+1118.487315175" lastFinishedPulling="2025-12-11 10:30:32.123268974 +0000 UTC m=+1150.147128007" observedRunningTime="2025-12-11 10:30:34.007942302 +0000 UTC m=+1152.031801355" watchObservedRunningTime="2025-12-11 10:30:34.009877593 +0000 UTC m=+1152.033736626" Dec 11 10:30:34 crc kubenswrapper[4953]: I1211 10:30:34.909375 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n" event={"ID":"9feb23b4-0b52-42c6-98a8-6b1de2241028","Type":"ContainerStarted","Data":"8e94bf242f6948f4497d8acff6ad8ce43d8cf8fecc22028a710a1fcae23c70d6"} Dec 11 10:30:34 crc kubenswrapper[4953]: I1211 10:30:34.909761 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n" Dec 11 10:30:34 crc kubenswrapper[4953]: I1211 10:30:34.913803 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px" event={"ID":"b7ddbee0-c6cd-4571-912d-09744da61237","Type":"ContainerStarted","Data":"2b6b67e1c375103e63b027c765e33a1fffd5ba49b3061c1732669794e5f5e828"} Dec 11 10:30:34 crc kubenswrapper[4953]: I1211 
10:30:34.951355 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n" podStartSLOduration=3.97107284 podStartE2EDuration="36.951333499s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.463442541 +0000 UTC m=+1118.487301574" lastFinishedPulling="2025-12-11 10:30:33.4437032 +0000 UTC m=+1151.467562233" observedRunningTime="2025-12-11 10:30:34.949724029 +0000 UTC m=+1152.973583072" watchObservedRunningTime="2025-12-11 10:30:34.951333499 +0000 UTC m=+1152.975192532" Dec 11 10:30:34 crc kubenswrapper[4953]: I1211 10:30:34.977769 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px" podStartSLOduration=4.193248466 podStartE2EDuration="36.977751671s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:00.446266502 +0000 UTC m=+1118.470125535" lastFinishedPulling="2025-12-11 10:30:33.230769707 +0000 UTC m=+1151.254628740" observedRunningTime="2025-12-11 10:30:34.97168153 +0000 UTC m=+1152.995540583" watchObservedRunningTime="2025-12-11 10:30:34.977751671 +0000 UTC m=+1153.001610704" Dec 11 10:30:35 crc kubenswrapper[4953]: I1211 10:30:35.935533 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px" Dec 11 10:30:38 crc kubenswrapper[4953]: I1211 10:30:38.452486 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-c8jqf" Dec 11 10:30:38 crc kubenswrapper[4953]: I1211 10:30:38.628734 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wvw7n" Dec 11 10:30:38 crc kubenswrapper[4953]: I1211 10:30:38.691773 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vz6q6" Dec 11 10:30:38 crc kubenswrapper[4953]: I1211 10:30:38.814461 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-c95px" Dec 11 10:30:38 crc kubenswrapper[4953]: I1211 10:30:38.879872 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-c2pkq" Dec 11 10:30:38 crc kubenswrapper[4953]: I1211 10:30:38.917603 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-z6knw" Dec 11 10:30:41 crc kubenswrapper[4953]: I1211 10:30:41.417425 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-lxtqb" Dec 11 10:30:46 crc kubenswrapper[4953]: E1211 10:30:46.223118 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48" Dec 11 10:30:46 crc kubenswrapper[4953]: E1211 10:30:46.224514 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:add611bf73d5aab1ac07ef665281ed0e5ad1aded495b8b32927aa2e726abb29a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:2f23894a78a13a0ae52fa2f8ae1e1b99282bebecd0cfda3db696e5d371097eaa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:36946a77001110f391fb254ec77129803a6b7c34dacfa1a4c8c51aa8d23d57c5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:dd58b29b5d88662a621c685c2b76fe8a71cc9e82aa85dff22a66182a6ceef3ae,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:fc47ed1c6249c9f6ef13ef1eac82d5a34819a715dea5117d33df0d0dc69ace8b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:e21d35c272d016f4dbd323dc827ee83538c96674adfb188e362aa652ce167b61,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24@sha256:6b929971283d69f485a7d3e449fb5a3dd65d5a4de585c73419e776821d00062c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:c2ace235f775334be02d78928802b76309543e869cc6b4b55843ee546691e6c3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:be77cc58b87f299b42bb2cbe74f3f8d028b8c887851a53209441b60e1363aeb5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:41dc9cf27a902d9c7b392d730bd761cf3c391a548a841e9e4d38e1571f3c53bf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:174f8f712eb5fdda5061a1a68624befb27bbe766842653788583ec74c5ae506a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/
podified-antelope-centos9/openstack-ceilometer-notification@sha256:df14f6de785b8aefc38ceb5b47088405224cfa914977c9ab811514cc77b08a67,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:b8d76f96b6f17a3318d089c0b5c0e6c292d969ab392cdcc708ec0f0188c953ae,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:43c55407c7c9b4141482533546e6570535373f7e36df374dfbbe388293c19dbf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:097816f289af117f14cd8ee1678a9635e8da6de4a1bde834d02199c4ef65c5c0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api@sha256:9b4547f0bbb29be8d91f7adbf4914712fcca39a6841293c334ee97340c4eb570,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor@sha256:e29f7d54ba2134b90fc17e5781773331b7d67b936419dbc81e20b8a6ce0866b9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api@sha256:281668af8ed34c2464f3593d350cf7b695b41b81f40cc539ad74b7b65822afb9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:84319e5dd6569ea531e64b688557c2a2e20deb5225f3d349e402e34858f00fe7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central@sha256:acb53e0e210562091843c212bc0cf5541daacd6f2bd18923430bae8c36578731,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:be6f4002842ebadf30d035721567a7e669f12a6eef8c00dc89030b3b08f3dd2c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:988635be61f6ed8c0d707622193b7efe8e9b1dc7effbf9b09d2db5ec593b59e7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound@sha256:63e08752678a68571e1c54ceea42c113af493a04cdc22198a3713df7b53f87e5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:6741d06b0f1bbeb2968807dc5be45853cdd3dfb9cc7ea6ef23e909ae24f3cbf4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr@sha256:1803a36d1a397a5595dddb4a2f791ab9443d3af97391a53928fa495ca7032d93,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid@sha256:d163fcf801d67d9c67b2ae4368675b75714db7c531de842aad43979a888c5d57,ValueFrom:nil,},EnvVar{Na
me:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron@sha256:15bf81d933a44128cb6f3264632a9563337eb3bfe82c4a33c746595467d3b0c3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:3a08e21338f651a90ee83ae46242b8c80c64488144f27a77848517049c3a8f5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:ebeb4443ab9f9360925f7abd9c24b7a453390d678f79ed247d2042dcc6f9c3fc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:04bb4cd601b08034c6cba18e701fcd36026ec4340402ed710a0bbd09d8e4884d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:27b80783b7d4658d89dda9a09924e9ee472908a8fa1c86bcf3f773d17a4196e0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api@sha256:8cb133c5a5551e1aa11ef3326149db1babbf00924d0ff493ebe3346b69fd4b5b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:13c3567176bb2d033f6c6b30e20404bd67a217e2537210bf222f3afe0c8619b7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:60ac3446d57f1a97a6ca2d8e6584b00aa18704bc2707a7ac1a6a28c6d685d215,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis@sha256:7e7788d1aae251e60f4012870140c65bce9760cd27feaeec5f65c42fe4ffce77,ValueFrom:nil,},EnvVar{Name:REL
ATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:6a401117007514660c694248adce8136d83559caf1b38e475935335e09ac954a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:364d50f873551805782c23264570eff40e3807f35d9bccdd456515b4e31da488,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:2d72dd490576e0cb670d21a08420888f3758d64ed0cbd2ef8b9aa8488ad2ce40,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:96fdf7cddf31509ee63950a9d61320d0b01beb1212e28f37a6e872d6589ded22,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:8b7534a2999075f919fc162d21f76026e8bf781913cc3d2ac07e484e9b2fc596,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent@sha256:d65eaaea2ab02d63af9d8a106619908fa01a2e56bd6753edc5590e66e46270db,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone@sha256:d042d7f91bafb002affff8cf750d694a0da129377255c502028528fe2280e790,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api@sha256:a8faef9ea5e8ef8327b7fbb9b9cafc74c38c09c7e3b2365a7cad5eb49766f71d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:88aa46ea03a5584560806aa4b093584fda6b2f54c562005b72be2e3615688090,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share@sha256:c08ecdfb7638c1897004347d835bdbabacff40a345f64c2b3111c377096bfa56,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils@sha256:8b4025a4f30e83acc0b51ac063eea701006a302a1acbdec53f54b540270887f7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api@sha256:4992f5ddbd20cca07e750846b2dbe7c51c5766c3002c388f8d8a158e347ec63d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_I
MAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:20b3ad38accb9eb8849599280a263d3436a5af03d89645e5ec4508586297ffde,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:378ed518b68ea809cffa2ff7a93d51e52cfc53af14eedc978924fdabccef0325,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:8c3632033f8c004f31a1c7c57c5ca7b450a11e9170a220b8943b57f80717c70c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:3f746f7c6a8c48c0f4a800dcb4bc49bfbc4de4a9ca6a55d8f22bc515a92ea1d9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:e1f7bf105190c3cbbfcf0aeeb77a92d1466100ba8377221ed5eee228949e05bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:954b4c60705b229a968aba3b5b35ab02759378706103ed1189fae3e3316fac35,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:f2e0025727efb95efa65e6af6338ae3fc79bf61095d6d54931a0be8d7fe9acac,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather@sha256:854a802357b4f565a366fce3bf29b20c1b768ec4ab7e822ef52dfc2fef000d2c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:194121c2d79401bd41f75428a437fe32a5806a6a160f7d80798ff66baed9afa5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:947c1bb9373b7d3f2acea104a5666e394c830111bf80d133f1fe7238e4d06f28,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:425ebddc9d6851ee9c730e67eaf43039943dc7937fb11332a41335a9114b2d44,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:bea03c7c34dc6ef8bc163e12a8940011b8feebc44a2efaaba2d3c4c6c515d6c8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348
b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:a2280bc80b454dc9e5c95daf74b8a53d6f9e42fc16d45287e089fc41014fe1da,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container@sha256:88d687a7bb593b2e61598b422baba84d67c114419590a6d83d15327d119ce208,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object@sha256:2635e02b99d380b2e547013c09c6c8da01bc89b3d3ce570e4d8f8656c7635b0e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:ac7fefe1c93839c7ccb2aaa0a18751df0e9f64a36a3b4cc1b81d82d7774b8b45,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:a357cf166caaeea230f8a912aceb042e3170c5d680844e8f97b936baa10834ed,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api@sha256:debe653cf73fece436c0fdc897a41f63b9b55b470ef04cddba573992f21ddf5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier@sha256:30cebe5bc6d290c90663bac2fc66122b38e677ec4714aaddb40a2dc239671ecd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:cb305c062a57fe0ec93b7ed6f6d0bb5b853872ed21dde1b354b853ceb569c6a3,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmhpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w_openstack-operators(e7802018-7972-4d69-8b66-ea4bb637ff7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 11 10:30:48 crc kubenswrapper[4953]: I1211 10:30:48.194744 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:30:48 crc kubenswrapper[4953]: I1211 10:30:48.195468 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:30:49 crc kubenswrapper[4953]: E1211 10:30:49.453065 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" podUID="e7802018-7972-4d69-8b66-ea4bb637ff7f"
Dec 11 10:30:50 crc kubenswrapper[4953]: I1211 10:30:50.229209 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" event={"ID":"e7802018-7972-4d69-8b66-ea4bb637ff7f","Type":"ContainerStarted","Data":"9292c88cb80bc4df0f0e0858123fd35fe4a4822c1ce63031d73d60d5b560d23f"}
Dec 11 10:30:50 crc kubenswrapper[4953]: E1211 10:30:50.232274 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" podUID="e7802018-7972-4d69-8b66-ea4bb637ff7f"
Dec 11 10:30:51 crc kubenswrapper[4953]: I1211 10:30:51.239655 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" event={"ID":"954a102c-a60d-405a-b579-450e6b8e5c8b","Type":"ContainerStarted","Data":"fe3b28699ebdf46d0a8463c502c0d35750ff5aaa5b2a397087df674589c136cd"}
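The machine-config-daemon entries above show what an HTTP liveness probe failure looks like from the kubelet's side: a GET against the container's health endpoint is refused, and the prober records a failure. A minimal sketch of such a check, with the URL taken from the log; the timeout value is illustrative, and success here follows the usual rule that any 2xx/3xx status passes:

    import urllib.request
    import urllib.error

    # Minimal sketch of an HTTP liveness check like the one logged above for
    # machine-config-daemon-q2898. URL is from the log; timeout is illustrative.
    def http_probe(url="http://127.0.0.1:8798/health", timeout=1.0):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400  # 2xx/3xx counts as success
        except (urllib.error.URLError, OSError):
            return False  # e.g. "connect: connection refused", as logged

    print(http_probe())  # False while nothing listens on 127.0.0.1:8798

Dec 11 10:30:51 crc kubenswrapper[4953]: E1211 10:30:51.242705 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 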
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" podUID="e7802018-7972-4d69-8b66-ea4bb637ff7f" Dec 11 10:30:52 crc kubenswrapper[4953]: I1211 10:30:52.247865 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" event={"ID":"954a102c-a60d-405a-b579-450e6b8e5c8b","Type":"ContainerStarted","Data":"03044a9a7da246c627407162c9e1797bcbc1d770eea1d531ffdf5d6fb53293d2"} Dec 11 10:30:52 crc kubenswrapper[4953]: I1211 10:30:52.248178 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" Dec 11 10:30:52 crc kubenswrapper[4953]: I1211 10:30:52.266835 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" podStartSLOduration=35.139681874 podStartE2EDuration="54.266748581s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:31.77276697 +0000 UTC m=+1149.796626003" lastFinishedPulling="2025-12-11 10:30:50.899833677 +0000 UTC m=+1168.923692710" observedRunningTime="2025-12-11 10:30:52.263183028 +0000 UTC m=+1170.287042061" watchObservedRunningTime="2025-12-11 10:30:52.266748581 +0000 UTC m=+1170.290607614" Dec 11 10:31:00 crc kubenswrapper[4953]: I1211 10:31:00.515731 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-nczzd" Dec 11 10:31:06 crc kubenswrapper[4953]: I1211 10:31:06.370486 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" event={"ID":"e7802018-7972-4d69-8b66-ea4bb637ff7f","Type":"ContainerStarted","Data":"698e6cc5a5db0fc01b05d0ee41cf4837f7bf6638b448831e592054715acfb42c"} Dec 11 10:31:06 crc kubenswrapper[4953]: I1211 10:31:06.371264 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" Dec 11 10:31:06 crc kubenswrapper[4953]: I1211 10:31:06.410359 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" podStartSLOduration=34.866691391 podStartE2EDuration="1m8.410341254s" podCreationTimestamp="2025-12-11 10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:30:31.966512119 +0000 UTC m=+1149.990371152" lastFinishedPulling="2025-12-11 10:31:05.510161982 +0000 UTC m=+1183.534021015" observedRunningTime="2025-12-11 10:31:06.404454547 +0000 UTC m=+1184.428313570" watchObservedRunningTime="2025-12-11 10:31:06.410341254 +0000 UTC m=+1184.434200287" Dec 11 10:31:10 crc kubenswrapper[4953]: I1211 10:31:10.905080 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w" Dec 11 10:31:18 crc kubenswrapper[4953]: I1211 10:31:18.194233 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:31:18 crc kubenswrapper[4953]: I1211 10:31:18.194964 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:31:18 crc kubenswrapper[4953]: I1211 10:31:18.195034 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:31:18 crc kubenswrapper[4953]: I1211 10:31:18.195779 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7aacf4c14bd2bc98ec833613461a09282ac2ac960a4b2c012b1862a1a65908a"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:31:18 crc kubenswrapper[4953]: I1211 10:31:18.195848 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://d7aacf4c14bd2bc98ec833613461a09282ac2ac960a4b2c012b1862a1a65908a" gracePeriod=600 Dec 11 10:31:19 crc kubenswrapper[4953]: I1211 10:31:19.689157 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="d7aacf4c14bd2bc98ec833613461a09282ac2ac960a4b2c012b1862a1a65908a" exitCode=0 Dec 11 10:31:19 crc kubenswrapper[4953]: I1211 10:31:19.689227 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"d7aacf4c14bd2bc98ec833613461a09282ac2ac960a4b2c012b1862a1a65908a"} Dec 11 10:31:19 crc kubenswrapper[4953]: I1211 10:31:19.689507 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"3a6e85260ff84ef604c5e7d3682ea7027e5daf751b9330364d08387a0213f214"} Dec 11 10:31:19 crc kubenswrapper[4953]: I1211 10:31:19.689527 4953 scope.go:117] "RemoveContainer" containerID="4128485b59765a5f0e1c236093ee311843a19fb26e6f522ba47964eefbd53b75" Dec 11 10:31:25 crc kubenswrapper[4953]: I1211 10:31:25.973378 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-ncvp2"] Dec 11 10:31:25 crc kubenswrapper[4953]: E1211 10:31:25.974254 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8937b8d-554d-44bf-9a69-b0e6350fd8f0" containerName="collect-profiles" Dec 11 10:31:25 crc kubenswrapper[4953]: I1211 10:31:25.974276 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8937b8d-554d-44bf-9a69-b0e6350fd8f0" containerName="collect-profiles" Dec 11 10:31:25 crc kubenswrapper[4953]: I1211 10:31:25.974484 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8937b8d-554d-44bf-9a69-b0e6350fd8f0" containerName="collect-profiles" Dec 11 10:31:25 crc kubenswrapper[4953]: I1211 10:31:25.975832 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" Dec 11 10:31:25 crc kubenswrapper[4953]: I1211 10:31:25.980444 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 11 10:31:25 crc kubenswrapper[4953]: I1211 10:31:25.980710 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jhf2j" Dec 11 10:31:25 crc kubenswrapper[4953]: I1211 10:31:25.987936 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 11 10:31:25 crc kubenswrapper[4953]: I1211 10:31:25.988113 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 11 10:31:25 crc kubenswrapper[4953]: I1211 10:31:25.991898 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-ncvp2"] Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.195966 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h56d8\" (UniqueName: \"kubernetes.io/projected/f12edfc1-607b-4fd2-bf95-997de251003d-kube-api-access-h56d8\") pod \"dnsmasq-dns-84bb9d8bd9-ncvp2\" (UID: \"f12edfc1-607b-4fd2-bf95-997de251003d\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.196531 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12edfc1-607b-4fd2-bf95-997de251003d-config\") pod \"dnsmasq-dns-84bb9d8bd9-ncvp2\" (UID: \"f12edfc1-607b-4fd2-bf95-997de251003d\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.240876 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-6xzcj"] Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.242221 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.244692 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.254519 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-6xzcj"] Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.297840 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12edfc1-607b-4fd2-bf95-997de251003d-config\") pod \"dnsmasq-dns-84bb9d8bd9-ncvp2\" (UID: \"f12edfc1-607b-4fd2-bf95-997de251003d\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.297930 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h56d8\" (UniqueName: \"kubernetes.io/projected/f12edfc1-607b-4fd2-bf95-997de251003d-kube-api-access-h56d8\") pod \"dnsmasq-dns-84bb9d8bd9-ncvp2\" (UID: \"f12edfc1-607b-4fd2-bf95-997de251003d\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.299330 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12edfc1-607b-4fd2-bf95-997de251003d-config\") pod \"dnsmasq-dns-84bb9d8bd9-ncvp2\" (UID: \"f12edfc1-607b-4fd2-bf95-997de251003d\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.325857 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h56d8\" (UniqueName: \"kubernetes.io/projected/f12edfc1-607b-4fd2-bf95-997de251003d-kube-api-access-h56d8\") pod \"dnsmasq-dns-84bb9d8bd9-ncvp2\" (UID: \"f12edfc1-607b-4fd2-bf95-997de251003d\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.398892 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/908458a6-c175-4cc4-85b8-c8e6313c5501-dns-svc\") pod \"dnsmasq-dns-5f854695bc-6xzcj\" (UID: \"908458a6-c175-4cc4-85b8-c8e6313c5501\") " pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.398944 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td9pq\" (UniqueName: \"kubernetes.io/projected/908458a6-c175-4cc4-85b8-c8e6313c5501-kube-api-access-td9pq\") pod \"dnsmasq-dns-5f854695bc-6xzcj\" (UID: \"908458a6-c175-4cc4-85b8-c8e6313c5501\") " pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.399089 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/908458a6-c175-4cc4-85b8-c8e6313c5501-config\") pod \"dnsmasq-dns-5f854695bc-6xzcj\" (UID: \"908458a6-c175-4cc4-85b8-c8e6313c5501\") " pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.409590 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.500189 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/908458a6-c175-4cc4-85b8-c8e6313c5501-dns-svc\") pod \"dnsmasq-dns-5f854695bc-6xzcj\" (UID: \"908458a6-c175-4cc4-85b8-c8e6313c5501\") " pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.500236 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td9pq\" (UniqueName: \"kubernetes.io/projected/908458a6-c175-4cc4-85b8-c8e6313c5501-kube-api-access-td9pq\") pod \"dnsmasq-dns-5f854695bc-6xzcj\" (UID: \"908458a6-c175-4cc4-85b8-c8e6313c5501\") " pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.500323 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/908458a6-c175-4cc4-85b8-c8e6313c5501-config\") pod \"dnsmasq-dns-5f854695bc-6xzcj\" (UID: \"908458a6-c175-4cc4-85b8-c8e6313c5501\") " pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.501339 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/908458a6-c175-4cc4-85b8-c8e6313c5501-config\") pod \"dnsmasq-dns-5f854695bc-6xzcj\" (UID: \"908458a6-c175-4cc4-85b8-c8e6313c5501\") " pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.501409 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/908458a6-c175-4cc4-85b8-c8e6313c5501-dns-svc\") pod \"dnsmasq-dns-5f854695bc-6xzcj\" (UID: \"908458a6-c175-4cc4-85b8-c8e6313c5501\") " pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.518503 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td9pq\" (UniqueName: \"kubernetes.io/projected/908458a6-c175-4cc4-85b8-c8e6313c5501-kube-api-access-td9pq\") pod \"dnsmasq-dns-5f854695bc-6xzcj\" (UID: \"908458a6-c175-4cc4-85b8-c8e6313c5501\") " pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.569269 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:31:26 crc kubenswrapper[4953]: I1211 10:31:26.979202 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-ncvp2"] Dec 11 10:31:26 crc kubenswrapper[4953]: W1211 10:31:26.983844 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf12edfc1_607b_4fd2_bf95_997de251003d.slice/crio-5d0f22fa79d60e7ad6ce2392cb373871888f17fcb786a0224b9b7992115ffe1a WatchSource:0}: Error finding container 5d0f22fa79d60e7ad6ce2392cb373871888f17fcb786a0224b9b7992115ffe1a: Status 404 returned error can't find the container with id 5d0f22fa79d60e7ad6ce2392cb373871888f17fcb786a0224b9b7992115ffe1a Dec 11 10:31:27 crc kubenswrapper[4953]: I1211 10:31:27.142713 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-6xzcj"] Dec 11 10:31:27 crc kubenswrapper[4953]: I1211 10:31:27.783333 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" event={"ID":"908458a6-c175-4cc4-85b8-c8e6313c5501","Type":"ContainerStarted","Data":"b59333c0304de15e75f2f4163fba37d0d9d5488fda6797f169652b9aa5d691bd"} Dec 11 10:31:27 crc kubenswrapper[4953]: I1211 10:31:27.784955 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" event={"ID":"f12edfc1-607b-4fd2-bf95-997de251003d","Type":"ContainerStarted","Data":"5d0f22fa79d60e7ad6ce2392cb373871888f17fcb786a0224b9b7992115ffe1a"} Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.135335 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-ncvp2"] Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.157019 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-sgr4j"] Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.158868 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.168705 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-sgr4j"] Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.219193 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33fa5e5b-3be4-4fb2-8a05-e9f500184264-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-sgr4j\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.219271 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33fa5e5b-3be4-4fb2-8a05-e9f500184264-config\") pod \"dnsmasq-dns-744ffd65bc-sgr4j\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.219426 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8xh9\" (UniqueName: \"kubernetes.io/projected/33fa5e5b-3be4-4fb2-8a05-e9f500184264-kube-api-access-t8xh9\") pod \"dnsmasq-dns-744ffd65bc-sgr4j\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.320411 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33fa5e5b-3be4-4fb2-8a05-e9f500184264-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-sgr4j\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.320495 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33fa5e5b-3be4-4fb2-8a05-e9f500184264-config\") pod \"dnsmasq-dns-744ffd65bc-sgr4j\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.320546 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8xh9\" (UniqueName: \"kubernetes.io/projected/33fa5e5b-3be4-4fb2-8a05-e9f500184264-kube-api-access-t8xh9\") pod \"dnsmasq-dns-744ffd65bc-sgr4j\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.321628 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33fa5e5b-3be4-4fb2-8a05-e9f500184264-config\") pod \"dnsmasq-dns-744ffd65bc-sgr4j\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.321658 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33fa5e5b-3be4-4fb2-8a05-e9f500184264-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-sgr4j\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.421360 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8xh9\" (UniqueName: 
\"kubernetes.io/projected/33fa5e5b-3be4-4fb2-8a05-e9f500184264-kube-api-access-t8xh9\") pod \"dnsmasq-dns-744ffd65bc-sgr4j\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.421432 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8xh9\" (UniqueName: \"kubernetes.io/projected/33fa5e5b-3be4-4fb2-8a05-e9f500184264-kube-api-access-t8xh9\") pod \"dnsmasq-dns-744ffd65bc-sgr4j\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.421826 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8xh9\" (UniqueName: \"kubernetes.io/projected/33fa5e5b-3be4-4fb2-8a05-e9f500184264-kube-api-access-t8xh9\") pod \"dnsmasq-dns-744ffd65bc-sgr4j\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.485092 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.553223 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-6xzcj"] Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.678637 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-p9d9d"] Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.680307 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.690737 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-p9d9d"] Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.839746 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3962e78-992c-4a5f-a874-2d744965e3bb-dns-svc\") pod \"dnsmasq-dns-95f5f6995-p9d9d\" (UID: \"c3962e78-992c-4a5f-a874-2d744965e3bb\") " pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.839810 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6z6d\" (UniqueName: \"kubernetes.io/projected/c3962e78-992c-4a5f-a874-2d744965e3bb-kube-api-access-f6z6d\") pod \"dnsmasq-dns-95f5f6995-p9d9d\" (UID: \"c3962e78-992c-4a5f-a874-2d744965e3bb\") " pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.840042 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3962e78-992c-4a5f-a874-2d744965e3bb-config\") pod \"dnsmasq-dns-95f5f6995-p9d9d\" (UID: \"c3962e78-992c-4a5f-a874-2d744965e3bb\") " pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.941677 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3962e78-992c-4a5f-a874-2d744965e3bb-dns-svc\") pod \"dnsmasq-dns-95f5f6995-p9d9d\" (UID: \"c3962e78-992c-4a5f-a874-2d744965e3bb\") " pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.941798 4953 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f6z6d\" (UniqueName: \"kubernetes.io/projected/c3962e78-992c-4a5f-a874-2d744965e3bb-kube-api-access-f6z6d\") pod \"dnsmasq-dns-95f5f6995-p9d9d\" (UID: \"c3962e78-992c-4a5f-a874-2d744965e3bb\") " pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.941870 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3962e78-992c-4a5f-a874-2d744965e3bb-config\") pod \"dnsmasq-dns-95f5f6995-p9d9d\" (UID: \"c3962e78-992c-4a5f-a874-2d744965e3bb\") " pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.943145 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3962e78-992c-4a5f-a874-2d744965e3bb-config\") pod \"dnsmasq-dns-95f5f6995-p9d9d\" (UID: \"c3962e78-992c-4a5f-a874-2d744965e3bb\") " pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.943707 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3962e78-992c-4a5f-a874-2d744965e3bb-dns-svc\") pod \"dnsmasq-dns-95f5f6995-p9d9d\" (UID: \"c3962e78-992c-4a5f-a874-2d744965e3bb\") " pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:31:28 crc kubenswrapper[4953]: I1211 10:31:28.980648 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6z6d\" (UniqueName: \"kubernetes.io/projected/c3962e78-992c-4a5f-a874-2d744965e3bb-kube-api-access-f6z6d\") pod \"dnsmasq-dns-95f5f6995-p9d9d\" (UID: \"c3962e78-992c-4a5f-a874-2d744965e3bb\") " pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.005922 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.203704 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-sgr4j"] Dec 11 10:31:29 crc kubenswrapper[4953]: W1211 10:31:29.225192 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33fa5e5b_3be4_4fb2_8a05_e9f500184264.slice/crio-1a7367ad2fb38295643b4bdf0c5915cdb6506755fb82f20c4098e6c3067c4c55 WatchSource:0}: Error finding container 1a7367ad2fb38295643b4bdf0c5915cdb6506755fb82f20c4098e6c3067c4c55: Status 404 returned error can't find the container with id 1a7367ad2fb38295643b4bdf0c5915cdb6506755fb82f20c4098e6c3067c4c55 Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.421056 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.441732 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.444696 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9bxn8" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.445071 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.445710 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.445915 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.446090 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.447795 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.450210 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.457321 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.607924 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv427\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-kube-api-access-fv427\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.608009 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.608035 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.608057 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.608079 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b29c8985-0d8c-4382-9969-29422929136f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.608115 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.608138 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.608156 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b29c8985-0d8c-4382-9969-29422929136f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.608170 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.608191 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.608415 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.713359 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv427\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-kube-api-access-fv427\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.713763 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.713793 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.713816 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " 
pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.713841 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b29c8985-0d8c-4382-9969-29422929136f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.713863 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.713890 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.713917 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b29c8985-0d8c-4382-9969-29422929136f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.713933 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.713956 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.713987 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.788095 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.792905 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.793146 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: 
\"b29c8985-0d8c-4382-9969-29422929136f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.795188 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.799329 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.803710 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.803784 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.804178 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.818384 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b29c8985-0d8c-4382-9969-29422929136f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.819994 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv427\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-kube-api-access-fv427\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.822535 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" event={"ID":"33fa5e5b-3be4-4fb2-8a05-e9f500184264","Type":"ContainerStarted","Data":"1a7367ad2fb38295643b4bdf0c5915cdb6506755fb82f20c4098e6c3067c4c55"} Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.829535 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.836005 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.839116 4953 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.844180 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.844422 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.844646 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.844827 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.845043 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tx2st" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.845205 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.850304 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.855259 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b29c8985-0d8c-4382-9969-29422929136f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") " pod="openstack/rabbitmq-server-0" Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.874078 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:31:29 crc kubenswrapper[4953]: W1211 10:31:29.880945 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3962e78_992c_4a5f_a874_2d744965e3bb.slice/crio-6400f0a3975d3d79d5ef8c72aaf6c1447bdc26992ee711e04ade1270a899e83d WatchSource:0}: Error finding container 6400f0a3975d3d79d5ef8c72aaf6c1447bdc26992ee711e04ade1270a899e83d: Status 404 returned error can't find the container with id 6400f0a3975d3d79d5ef8c72aaf6c1447bdc26992ee711e04ade1270a899e83d Dec 11 10:31:29 crc kubenswrapper[4953]: I1211 10:31:29.885861 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-p9d9d"] Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.021884 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.021956 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.022022 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.022042 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01196778-96de-4f79-b9ac-e01243f86ebb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.022065 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.022114 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01196778-96de-4f79-b9ac-e01243f86ebb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.022149 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.022178 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.022256 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.022311 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.022337 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsppt\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-kube-api-access-hsppt\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.098616 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.126507 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.126839 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01196778-96de-4f79-b9ac-e01243f86ebb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.126987 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.127083 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01196778-96de-4f79-b9ac-e01243f86ebb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.127147 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.127208 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.127236 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.127306 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.127327 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsppt\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-kube-api-access-hsppt\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.127390 4953 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.127418 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.129010 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.129195 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.130016 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.131500 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.133273 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.138317 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.146360 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.148282 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/01196778-96de-4f79-b9ac-e01243f86ebb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.150073 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01196778-96de-4f79-b9ac-e01243f86ebb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.150131 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsppt\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-kube-api-access-hsppt\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.170354 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.176248 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.225778 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.699268 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:31:30 crc kubenswrapper[4953]: W1211 10:31:30.714886 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb29c8985_0d8c_4382_9969_29422929136f.slice/crio-1656a1963db1fe5049b3c6547d05e4f0c8bac32bebf99bdbbe2e3b32ddd579f6 WatchSource:0}: Error finding container 1656a1963db1fe5049b3c6547d05e4f0c8bac32bebf99bdbbe2e3b32ddd579f6: Status 404 returned error can't find the container with id 1656a1963db1fe5049b3c6547d05e4f0c8bac32bebf99bdbbe2e3b32ddd579f6 Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.826256 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.839300 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b29c8985-0d8c-4382-9969-29422929136f","Type":"ContainerStarted","Data":"1656a1963db1fe5049b3c6547d05e4f0c8bac32bebf99bdbbe2e3b32ddd579f6"} Dec 11 10:31:30 crc kubenswrapper[4953]: I1211 10:31:30.841214 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" event={"ID":"c3962e78-992c-4a5f-a874-2d744965e3bb","Type":"ContainerStarted","Data":"6400f0a3975d3d79d5ef8c72aaf6c1447bdc26992ee711e04ade1270a899e83d"} Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:30.999948 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.002259 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.006105 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.006496 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.006522 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.006946 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4mfbc" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.011197 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.031924 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.157256 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-operator-scripts\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.157744 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23f99edb-3870-42f3-bdef-ec4db335ba35-config-data-generated\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.157783 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.157813 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9xhj\" (UniqueName: \"kubernetes.io/projected/23f99edb-3870-42f3-bdef-ec4db335ba35-kube-api-access-c9xhj\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.157856 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-config-data-default\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.157938 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f99edb-3870-42f3-bdef-ec4db335ba35-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.158016 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f99edb-3870-42f3-bdef-ec4db335ba35-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.158067 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-kolla-config\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.260762 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-operator-scripts\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.260846 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23f99edb-3870-42f3-bdef-ec4db335ba35-config-data-generated\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.260892 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.260945 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9xhj\" (UniqueName: \"kubernetes.io/projected/23f99edb-3870-42f3-bdef-ec4db335ba35-kube-api-access-c9xhj\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.260973 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-config-data-default\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.261034 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f99edb-3870-42f3-bdef-ec4db335ba35-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.261066 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f99edb-3870-42f3-bdef-ec4db335ba35-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.261124 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.261321 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23f99edb-3870-42f3-bdef-ec4db335ba35-config-data-generated\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.261483 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.262104 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-kolla-config\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.263625 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-config-data-default\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.268029 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f99edb-3870-42f3-bdef-ec4db335ba35-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.268944 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-operator-scripts\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.305257 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f99edb-3870-42f3-bdef-ec4db335ba35-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.335969 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9xhj\" (UniqueName: \"kubernetes.io/projected/23f99edb-3870-42f3-bdef-ec4db335ba35-kube-api-access-c9xhj\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.376591 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " pod="openstack/openstack-galera-0" Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.648525 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0"
Dec 11 10:31:31 crc kubenswrapper[4953]: I1211 10:31:31.868166 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"01196778-96de-4f79-b9ac-e01243f86ebb","Type":"ContainerStarted","Data":"335e970b2bb298e905b5df33c2c1753b2cd38ee7861b2571301309400e9b9c32"}
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.153163 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.154559 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.160654 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.160688 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.160689 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.166896 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-x4jbt"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.177402 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.224279 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.278244 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.278611 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27258186-4cab-45b4-a20c-a4c3ddc82f76-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.278675 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27258186-4cab-45b4-a20c-a4c3ddc82f76-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.278749 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.278842 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27258186-4cab-45b4-a20c-a4c3ddc82f76-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.278885 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npns9\" (UniqueName: \"kubernetes.io/projected/27258186-4cab-45b4-a20c-a4c3ddc82f76-kube-api-access-npns9\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.278951 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.279013 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.381878 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27258186-4cab-45b4-a20c-a4c3ddc82f76-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.382008 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.382050 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27258186-4cab-45b4-a20c-a4c3ddc82f76-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.382129 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npns9\" (UniqueName: \"kubernetes.io/projected/27258186-4cab-45b4-a20c-a4c3ddc82f76-kube-api-access-npns9\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.382171 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.382197 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.382279 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.382303 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27258186-4cab-45b4-a20c-a4c3ddc82f76-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.382870 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27258186-4cab-45b4-a20c-a4c3ddc82f76-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.384626 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.384674 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.385419 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.385876 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.391714 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27258186-4cab-45b4-a20c-a4c3ddc82f76-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.395483 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27258186-4cab-45b4-a20c-a4c3ddc82f76-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.414172 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npns9\" (UniqueName: \"kubernetes.io/projected/27258186-4cab-45b4-a20c-a4c3ddc82f76-kube-api-access-npns9\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.415965 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.474047 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.554006 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.592820 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.592919 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.598350 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.598608 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-dh5wx"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.607175 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.690201 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab07f951-5c8d-428b-9b26-52ea2284ee52-kolla-config\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.691819 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab07f951-5c8d-428b-9b26-52ea2284ee52-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.691873 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab07f951-5c8d-428b-9b26-52ea2284ee52-config-data\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.691948 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab07f951-5c8d-428b-9b26-52ea2284ee52-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.692049 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6npsw\" (UniqueName: \"kubernetes.io/projected/ab07f951-5c8d-428b-9b26-52ea2284ee52-kube-api-access-6npsw\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.797157 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab07f951-5c8d-428b-9b26-52ea2284ee52-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.797328 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6npsw\" (UniqueName: \"kubernetes.io/projected/ab07f951-5c8d-428b-9b26-52ea2284ee52-kube-api-access-6npsw\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.797404 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab07f951-5c8d-428b-9b26-52ea2284ee52-kolla-config\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.797492 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab07f951-5c8d-428b-9b26-52ea2284ee52-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.797547 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab07f951-5c8d-428b-9b26-52ea2284ee52-config-data\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.800341 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab07f951-5c8d-428b-9b26-52ea2284ee52-kolla-config\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.806699 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab07f951-5c8d-428b-9b26-52ea2284ee52-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.808686 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab07f951-5c8d-428b-9b26-52ea2284ee52-config-data\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.812697 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab07f951-5c8d-428b-9b26-52ea2284ee52-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.837877 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6npsw\" (UniqueName: \"kubernetes.io/projected/ab07f951-5c8d-428b-9b26-52ea2284ee52-kube-api-access-6npsw\") pod \"memcached-0\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " pod="openstack/memcached-0"
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.879161 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23f99edb-3870-42f3-bdef-ec4db335ba35","Type":"ContainerStarted","Data":"19f2ff2b286c609eb991cdae646608c15ba0dd6a59d7d95eb2c9b362d53eba22"}
Dec 11 10:31:32 crc kubenswrapper[4953]: I1211 10:31:32.930703 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Dec 11 10:31:33 crc kubenswrapper[4953]: I1211 10:31:33.133722 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 11 10:31:33 crc kubenswrapper[4953]: I1211 10:31:33.441259 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Dec 11 10:31:33 crc kubenswrapper[4953]: W1211 10:31:33.460701 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab07f951_5c8d_428b_9b26_52ea2284ee52.slice/crio-7738e432bc1c8b812b699e92585c8058cc38d67737fb88e0c1088d675668ea1e WatchSource:0}: Error finding container 7738e432bc1c8b812b699e92585c8058cc38d67737fb88e0c1088d675668ea1e: Status 404 returned error can't find the container with id 7738e432bc1c8b812b699e92585c8058cc38d67737fb88e0c1088d675668ea1e
Dec 11 10:31:33 crc kubenswrapper[4953]: I1211 10:31:33.946256 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ab07f951-5c8d-428b-9b26-52ea2284ee52","Type":"ContainerStarted","Data":"7738e432bc1c8b812b699e92585c8058cc38d67737fb88e0c1088d675668ea1e"}
Dec 11 10:31:33 crc kubenswrapper[4953]: I1211 10:31:33.972412 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27258186-4cab-45b4-a20c-a4c3ddc82f76","Type":"ContainerStarted","Data":"4ef4df22cc9c9f55d1bb8a1da388a5bbc6210506bfcd3708750a31edf4486029"}
Dec 11 10:31:34 crc kubenswrapper[4953]: I1211 10:31:34.465302 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 11 10:31:34 crc kubenswrapper[4953]: I1211 10:31:34.466859 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 11 10:31:34 crc kubenswrapper[4953]: I1211 10:31:34.469524 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-5np74"
Dec 11 10:31:34 crc kubenswrapper[4953]: I1211 10:31:34.527642 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-742qb\" (UniqueName: \"kubernetes.io/projected/9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab-kube-api-access-742qb\") pod \"kube-state-metrics-0\" (UID: \"9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab\") " pod="openstack/kube-state-metrics-0"
Dec 11 10:31:34 crc kubenswrapper[4953]: I1211 10:31:34.543504 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 11 10:31:34 crc kubenswrapper[4953]: I1211 10:31:34.629374 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-742qb\" (UniqueName: \"kubernetes.io/projected/9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab-kube-api-access-742qb\") pod \"kube-state-metrics-0\" (UID: \"9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab\") " pod="openstack/kube-state-metrics-0"
Dec 11 10:31:34 crc kubenswrapper[4953]: I1211 10:31:34.650476 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-742qb\" (UniqueName: \"kubernetes.io/projected/9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab-kube-api-access-742qb\") pod \"kube-state-metrics-0\" (UID: \"9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab\") " pod="openstack/kube-state-metrics-0"
Dec 11 10:31:34 crc kubenswrapper[4953]: I1211 10:31:34.813945 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 11 10:31:35 crc kubenswrapper[4953]: I1211 10:31:35.371485 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.655263 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n6pxp"]
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.657058 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.660639 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-tpwzf"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.660942 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.661085 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.677181 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n6pxp"]
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.682963 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-run\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.683014 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498f7a43-7db9-42e8-b722-a5fb6ae4749f-combined-ca-bundle\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.683049 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/498f7a43-7db9-42e8-b722-a5fb6ae4749f-scripts\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.683153 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-run-ovn\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.683272 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48wpb\" (UniqueName: \"kubernetes.io/projected/498f7a43-7db9-42e8-b722-a5fb6ae4749f-kube-api-access-48wpb\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.683316 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/498f7a43-7db9-42e8-b722-a5fb6ae4749f-ovn-controller-tls-certs\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.683409 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-log-ovn\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.687006 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-mbtwm"]
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.689494 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.711158 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mbtwm"]
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785471 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-run\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785528 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498f7a43-7db9-42e8-b722-a5fb6ae4749f-combined-ca-bundle\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785586 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/498f7a43-7db9-42e8-b722-a5fb6ae4749f-scripts\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785614 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-lib\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785659 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-log\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785713 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-run-ovn\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785753 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-run\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785820 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48wpb\" (UniqueName: \"kubernetes.io/projected/498f7a43-7db9-42e8-b722-a5fb6ae4749f-kube-api-access-48wpb\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785845 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/498f7a43-7db9-42e8-b722-a5fb6ae4749f-ovn-controller-tls-certs\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785870 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-etc-ovs\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785917 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5cfd14e5-05e2-4cc5-ba83-259321c6f872-scripts\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785942 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-log-ovn\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.785995 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9bsb\" (UniqueName: \"kubernetes.io/projected/5cfd14e5-05e2-4cc5-ba83-259321c6f872-kube-api-access-d9bsb\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.786769 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-run\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.788970 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-run-ovn\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.789237 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-log-ovn\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.790028 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/498f7a43-7db9-42e8-b722-a5fb6ae4749f-scripts\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.793546 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498f7a43-7db9-42e8-b722-a5fb6ae4749f-combined-ca-bundle\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.810564 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48wpb\" (UniqueName: \"kubernetes.io/projected/498f7a43-7db9-42e8-b722-a5fb6ae4749f-kube-api-access-48wpb\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.812241 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/498f7a43-7db9-42e8-b722-a5fb6ae4749f-ovn-controller-tls-certs\") pod \"ovn-controller-n6pxp\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.887434 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5cfd14e5-05e2-4cc5-ba83-259321c6f872-scripts\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.887521 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9bsb\" (UniqueName: \"kubernetes.io/projected/5cfd14e5-05e2-4cc5-ba83-259321c6f872-kube-api-access-d9bsb\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.887562 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-lib\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.887603 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-log\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.887652 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-run\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.887675 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-etc-ovs\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.888084 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-etc-ovs\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.889019 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-run\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.889051 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-log\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.889261 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-lib\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.890187 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5cfd14e5-05e2-4cc5-ba83-259321c6f872-scripts\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.908710 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9bsb\" (UniqueName: \"kubernetes.io/projected/5cfd14e5-05e2-4cc5-ba83-259321c6f872-kube-api-access-d9bsb\") pod \"ovn-controller-ovs-mbtwm\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") " pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:37 crc kubenswrapper[4953]: I1211 10:31:37.992412 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n6pxp"
Dec 11 10:31:38 crc kubenswrapper[4953]: I1211 10:31:38.010467 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.373624 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.374975 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.376790 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-9b6nv"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.378755 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.378759 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.378755 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.378984 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.385263 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.413856 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.413964 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv9xc\" (UniqueName: \"kubernetes.io/projected/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-kube-api-access-xv9xc\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.414000 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.414126 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.414188 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-config\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.414213 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.414262 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.414389 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.515767 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-config\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.515822 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.515886 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.515923 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.515994 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.516030 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv9xc\" (UniqueName: \"kubernetes.io/projected/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-kube-api-access-xv9xc\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.516050 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.516087 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.516434 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.516641 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-config\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.517055 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.518087 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.520355 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.522191 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.529957 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.533678 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv9xc\" (UniqueName: \"kubernetes.io/projected/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-kube-api-access-xv9xc\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.539094 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:39 crc kubenswrapper[4953]: I1211 10:31:39.695170 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.740606 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.742313 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.744074 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.744144 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-jnvdn"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.746101 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.746161 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.771126 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.854678 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.854960 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.855164 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.855275 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-config\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.855364 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.855394 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.855412 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8w67\" (UniqueName: \"kubernetes.io/projected/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-kube-api-access-c8w67\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.855442 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.956639 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.956730 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.956781 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.956807 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-config\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.956839 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.956861 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.956878 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8w67\" (UniqueName: \"kubernetes.io/projected/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-kube-api-access-c8w67\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.956906 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.957600 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.957833 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.958032 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-config\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.958041 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.963658 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.965293 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.966089 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:41 crc kubenswrapper[4953]: I1211 10:31:41.983361 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8w67\" (UniqueName: \"kubernetes.io/projected/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-kube-api-access-c8w67\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:42 crc kubenswrapper[4953]: I1211 10:31:42.002114 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 10:31:42 crc kubenswrapper[4953]: I1211 10:31:42.047887 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab","Type":"ContainerStarted","Data":"738dfe9957b6c367e5eeb447874b2745a611cfce8db1f2768ce655265cb1852c"}
Dec 11 10:31:42 crc kubenswrapper[4953]: I1211 10:31:42.062759 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Dec 11 10:32:08 crc kubenswrapper[4953]: E1211 10:32:08.818176 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d"
Dec 11 10:32:08 crc kubenswrapper[4953]: E1211 10:32:08.818963 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv427,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(b29c8985-0d8c-4382-9969-29422929136f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 11 10:32:08 crc kubenswrapper[4953]: E1211 10:32:08.821329 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="b29c8985-0d8c-4382-9969-29422929136f"
Dec 11 10:32:09 crc kubenswrapper[4953]: E1211 10:32:09.312801 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="b29c8985-0d8c-4382-9969-29422929136f"
Dec 11 10:32:09 crc kubenswrapper[4953]: E1211 10:32:09.656303 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc"
Dec 11 10:32:09 crc kubenswrapper[4953]: E1211 10:32:09.656555 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n8bh95h64bh697h65fh9h678h9chf8h675h567h5hfh78h694h66bh689h5c4h7dh79h85h555h5d6h666h568h6ch5b4h89hc8h5d7h579h58q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6npsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(ab07f951-5c8d-428b-9b26-52ea2284ee52): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 11 10:32:09 crc kubenswrapper[4953]: E1211 10:32:09.657846 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="ab07f951-5c8d-428b-9b26-52ea2284ee52"
Dec 11 10:32:09 crc kubenswrapper[4953]: E1211 10:32:09.790025 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d"
Dec 11 10:32:09 crc kubenswrapper[4953]: E1211 10:32:09.790600 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsppt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(01196778-96de-4f79-b9ac-e01243f86ebb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 11 10:32:09 crc kubenswrapper[4953]: E1211 10:32:09.791837 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="01196778-96de-4f79-b9ac-e01243f86ebb"
Dec 11 10:32:10 crc kubenswrapper[4953]: E1211 10:32:10.327003 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc\\\"\"" pod="openstack/memcached-0" podUID="ab07f951-5c8d-428b-9b26-52ea2284ee52"
Dec 11 10:32:10 crc kubenswrapper[4953]: E1211 10:32:10.327339 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="01196778-96de-4f79-b9ac-e01243f86ebb"
Dec 11 10:32:11 crc kubenswrapper[4953]: E1211 10:32:11.407147 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33"
Dec 11 10:32:11 crc
kubenswrapper[4953]: E1211 10:32:11.407434 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h56d8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-ncvp2_openstack(f12edfc1-607b-4fd2-bf95-997de251003d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:32:11 crc kubenswrapper[4953]: E1211 10:32:11.408929 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" podUID="f12edfc1-607b-4fd2-bf95-997de251003d" Dec 11 10:32:11 crc kubenswrapper[4953]: E1211 10:32:11.983852 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Dec 11 10:32:11 crc kubenswrapper[4953]: E1211 10:32:11.984076 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8xh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-sgr4j_openstack(33fa5e5b-3be4-4fb2-8a05-e9f500184264): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:32:11 crc kubenswrapper[4953]: E1211 10:32:11.985289 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" podUID="33fa5e5b-3be4-4fb2-8a05-e9f500184264" Dec 11 10:32:12 crc kubenswrapper[4953]: E1211 10:32:12.344101 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" podUID="33fa5e5b-3be4-4fb2-8a05-e9f500184264" Dec 11 10:32:13 crc kubenswrapper[4953]: E1211 10:32:13.299217 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Dec 11 10:32:13 crc kubenswrapper[4953]: E1211 10:32:13.299429 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts 
--domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td9pq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-6xzcj_openstack(908458a6-c175-4cc4-85b8-c8e6313c5501): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:32:13 crc kubenswrapper[4953]: E1211 10:32:13.300636 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" podUID="908458a6-c175-4cc4-85b8-c8e6313c5501" Dec 11 10:32:13 crc kubenswrapper[4953]: E1211 10:32:13.971512 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Dec 11 10:32:13 crc kubenswrapper[4953]: E1211 10:32:13.971969 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9xhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(23f99edb-3870-42f3-bdef-ec4db335ba35): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:32:13 crc kubenswrapper[4953]: E1211 10:32:13.973676 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="23f99edb-3870-42f3-bdef-ec4db335ba35" Dec 11 10:32:13 crc kubenswrapper[4953]: E1211 10:32:13.975508 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Dec 11 10:32:13 crc kubenswrapper[4953]: E1211 10:32:13.975743 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npns9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(27258186-4cab-45b4-a20c-a4c3ddc82f76): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:32:13 crc kubenswrapper[4953]: E1211 10:32:13.977371 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="27258186-4cab-45b4-a20c-a4c3ddc82f76" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.071305 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.082998 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.114353 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/908458a6-c175-4cc4-85b8-c8e6313c5501-config\") pod \"908458a6-c175-4cc4-85b8-c8e6313c5501\" (UID: \"908458a6-c175-4cc4-85b8-c8e6313c5501\") " Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.114468 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td9pq\" (UniqueName: \"kubernetes.io/projected/908458a6-c175-4cc4-85b8-c8e6313c5501-kube-api-access-td9pq\") pod \"908458a6-c175-4cc4-85b8-c8e6313c5501\" (UID: \"908458a6-c175-4cc4-85b8-c8e6313c5501\") " Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.114506 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/908458a6-c175-4cc4-85b8-c8e6313c5501-dns-svc\") pod \"908458a6-c175-4cc4-85b8-c8e6313c5501\" (UID: \"908458a6-c175-4cc4-85b8-c8e6313c5501\") " Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.115673 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908458a6-c175-4cc4-85b8-c8e6313c5501-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "908458a6-c175-4cc4-85b8-c8e6313c5501" (UID: "908458a6-c175-4cc4-85b8-c8e6313c5501"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.116129 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908458a6-c175-4cc4-85b8-c8e6313c5501-config" (OuterVolumeSpecName: "config") pod "908458a6-c175-4cc4-85b8-c8e6313c5501" (UID: "908458a6-c175-4cc4-85b8-c8e6313c5501"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.122081 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908458a6-c175-4cc4-85b8-c8e6313c5501-kube-api-access-td9pq" (OuterVolumeSpecName: "kube-api-access-td9pq") pod "908458a6-c175-4cc4-85b8-c8e6313c5501" (UID: "908458a6-c175-4cc4-85b8-c8e6313c5501"). InnerVolumeSpecName "kube-api-access-td9pq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.215839 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h56d8\" (UniqueName: \"kubernetes.io/projected/f12edfc1-607b-4fd2-bf95-997de251003d-kube-api-access-h56d8\") pod \"f12edfc1-607b-4fd2-bf95-997de251003d\" (UID: \"f12edfc1-607b-4fd2-bf95-997de251003d\") " Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.215963 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12edfc1-607b-4fd2-bf95-997de251003d-config\") pod \"f12edfc1-607b-4fd2-bf95-997de251003d\" (UID: \"f12edfc1-607b-4fd2-bf95-997de251003d\") " Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.216431 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/908458a6-c175-4cc4-85b8-c8e6313c5501-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.216458 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td9pq\" (UniqueName: \"kubernetes.io/projected/908458a6-c175-4cc4-85b8-c8e6313c5501-kube-api-access-td9pq\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.216469 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/908458a6-c175-4cc4-85b8-c8e6313c5501-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.216892 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f12edfc1-607b-4fd2-bf95-997de251003d-config" (OuterVolumeSpecName: "config") pod "f12edfc1-607b-4fd2-bf95-997de251003d" (UID: "f12edfc1-607b-4fd2-bf95-997de251003d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.219486 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f12edfc1-607b-4fd2-bf95-997de251003d-kube-api-access-h56d8" (OuterVolumeSpecName: "kube-api-access-h56d8") pod "f12edfc1-607b-4fd2-bf95-997de251003d" (UID: "f12edfc1-607b-4fd2-bf95-997de251003d"). InnerVolumeSpecName "kube-api-access-h56d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.318585 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h56d8\" (UniqueName: \"kubernetes.io/projected/f12edfc1-607b-4fd2-bf95-997de251003d-kube-api-access-h56d8\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.318611 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12edfc1-607b-4fd2-bf95-997de251003d-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.356089 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.356081 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-6xzcj" event={"ID":"908458a6-c175-4cc4-85b8-c8e6313c5501","Type":"ContainerDied","Data":"b59333c0304de15e75f2f4163fba37d0d9d5488fda6797f169652b9aa5d691bd"} Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.357293 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" event={"ID":"f12edfc1-607b-4fd2-bf95-997de251003d","Type":"ContainerDied","Data":"5d0f22fa79d60e7ad6ce2392cb373871888f17fcb786a0224b9b7992115ffe1a"} Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.357431 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-ncvp2" Dec 11 10:32:14 crc kubenswrapper[4953]: E1211 10:32:14.359158 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="27258186-4cab-45b4-a20c-a4c3ddc82f76" Dec 11 10:32:14 crc kubenswrapper[4953]: E1211 10:32:14.359419 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" podUID="23f99edb-3870-42f3-bdef-ec4db335ba35" Dec 11 10:32:14 crc kubenswrapper[4953]: E1211 10:32:14.377872 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Dec 11 10:32:14 crc kubenswrapper[4953]: E1211 10:32:14.378034 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6z6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-p9d9d_openstack(c3962e78-992c-4a5f-a874-2d744965e3bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:32:14 crc kubenswrapper[4953]: E1211 10:32:14.379548 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" podUID="c3962e78-992c-4a5f-a874-2d744965e3bb" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.434541 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n6pxp"] Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.465520 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-ncvp2"] Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.472701 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-ncvp2"] Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.483645 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f12edfc1-607b-4fd2-bf95-997de251003d" path="/var/lib/kubelet/pods/f12edfc1-607b-4fd2-bf95-997de251003d/volumes" Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.496532 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-6xzcj"] Dec 11 10:32:14 crc kubenswrapper[4953]: I1211 10:32:14.505090 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-6xzcj"] Dec 11 10:32:15 crc kubenswrapper[4953]: I1211 10:32:15.028905 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 10:32:15 crc kubenswrapper[4953]: I1211 10:32:15.095184 4953 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mbtwm"] Dec 11 10:32:15 crc kubenswrapper[4953]: W1211 10:32:15.097556 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cfd14e5_05e2_4cc5_ba83_259321c6f872.slice/crio-4399ce8689c1cc367477024dd8650a0a20f282fb0dc067b1690b582fe77bbdf2 WatchSource:0}: Error finding container 4399ce8689c1cc367477024dd8650a0a20f282fb0dc067b1690b582fe77bbdf2: Status 404 returned error can't find the container with id 4399ce8689c1cc367477024dd8650a0a20f282fb0dc067b1690b582fe77bbdf2 Dec 11 10:32:15 crc kubenswrapper[4953]: E1211 10:32:15.152824 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Dec 11 10:32:15 crc kubenswrapper[4953]: E1211 10:32:15.152878 4953 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Dec 11 10:32:15 crc kubenswrapper[4953]: E1211 10:32:15.153041 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-742qb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
kube-state-metrics-0_openstack(9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Dec 11 10:32:15 crc kubenswrapper[4953]: E1211 10:32:15.154272 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab" Dec 11 10:32:15 crc kubenswrapper[4953]: I1211 10:32:15.366301 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbtwm" event={"ID":"5cfd14e5-05e2-4cc5-ba83-259321c6f872","Type":"ContainerStarted","Data":"4399ce8689c1cc367477024dd8650a0a20f282fb0dc067b1690b582fe77bbdf2"} Dec 11 10:32:15 crc kubenswrapper[4953]: I1211 10:32:15.367610 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e","Type":"ContainerStarted","Data":"b1d5dfa95f27ea161b20ee40d6f6d4bf87396305b0879c824cc4998b6621f128"} Dec 11 10:32:15 crc kubenswrapper[4953]: I1211 10:32:15.368450 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6pxp" event={"ID":"498f7a43-7db9-42e8-b722-a5fb6ae4749f","Type":"ContainerStarted","Data":"abd5eb9c865c91ed39409e56456d5631999a02a171950a07d1bf10beb42a032a"} Dec 11 10:32:15 crc kubenswrapper[4953]: E1211 10:32:15.369780 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab" Dec 11 10:32:15 crc kubenswrapper[4953]: E1211 10:32:15.370235 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" podUID="c3962e78-992c-4a5f-a874-2d744965e3bb" Dec 11 10:32:15 crc kubenswrapper[4953]: I1211 10:32:15.848341 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 10:32:15 crc kubenswrapper[4953]: W1211 10:32:15.857701 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4237606_fdcf_403b_8e5a_1bbb4a2e38de.slice/crio-6b54a10b2979db12155abb16c859a889440d76284d97353bd444550a2c49e75f WatchSource:0}: Error finding container 6b54a10b2979db12155abb16c859a889440d76284d97353bd444550a2c49e75f: Status 404 returned error can't find the container with id 6b54a10b2979db12155abb16c859a889440d76284d97353bd444550a2c49e75f Dec 11 10:32:16 crc kubenswrapper[4953]: I1211 10:32:16.422526 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4237606-fdcf-403b-8e5a-1bbb4a2e38de","Type":"ContainerStarted","Data":"6b54a10b2979db12155abb16c859a889440d76284d97353bd444550a2c49e75f"} Dec 11 10:32:16 crc kubenswrapper[4953]: I1211 10:32:16.483161 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="908458a6-c175-4cc4-85b8-c8e6313c5501" path="/var/lib/kubelet/pods/908458a6-c175-4cc4-85b8-c8e6313c5501/volumes" Dec 11 10:32:21 crc kubenswrapper[4953]: I1211 10:32:21.466048 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e","Type":"ContainerStarted","Data":"b7f497b107b8e8652a7f168df902d76edf4cc8c0d003e369a126e81b80c2c81c"} Dec 11 10:32:22 crc kubenswrapper[4953]: I1211 10:32:22.495607 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4237606-fdcf-403b-8e5a-1bbb4a2e38de","Type":"ContainerStarted","Data":"16b6376ca3b41c1f6e9ee55d0479d0566772d86be8f749eb1b02c4edcfa051b9"} Dec 11 10:32:22 crc kubenswrapper[4953]: I1211 10:32:22.498263 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6pxp" event={"ID":"498f7a43-7db9-42e8-b722-a5fb6ae4749f","Type":"ContainerStarted","Data":"d0b7c04c7aac708c8d19088fd2a98707adc64c19e1992cf63c2b85b7be925ba4"} Dec 11 10:32:22 crc kubenswrapper[4953]: I1211 10:32:22.498391 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-n6pxp" Dec 11 10:32:22 crc kubenswrapper[4953]: I1211 10:32:22.505715 4953 generic.go:334] "Generic (PLEG): container finished" podID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerID="a60310c044585031cbb4f2c50aa560e5bb93943517261c36733ba28b71e81580" exitCode=0 Dec 11 10:32:22 crc kubenswrapper[4953]: I1211 10:32:22.506124 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbtwm" event={"ID":"5cfd14e5-05e2-4cc5-ba83-259321c6f872","Type":"ContainerDied","Data":"a60310c044585031cbb4f2c50aa560e5bb93943517261c36733ba28b71e81580"} Dec 11 10:32:22 crc kubenswrapper[4953]: I1211 10:32:22.713722 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-n6pxp" podStartSLOduration=38.955674514 podStartE2EDuration="45.713697165s" podCreationTimestamp="2025-12-11 10:31:37 +0000 UTC" firstStartedPulling="2025-12-11 10:32:14.433798918 +0000 UTC m=+1252.457657951" lastFinishedPulling="2025-12-11 10:32:21.191821559 +0000 UTC m=+1259.215680602" observedRunningTime="2025-12-11 10:32:22.699371528 +0000 UTC m=+1260.723230561" watchObservedRunningTime="2025-12-11 10:32:22.713697165 +0000 UTC m=+1260.737556208" Dec 11 10:32:23 crc kubenswrapper[4953]: I1211 10:32:23.522405 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbtwm" event={"ID":"5cfd14e5-05e2-4cc5-ba83-259321c6f872","Type":"ContainerStarted","Data":"9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52"} Dec 11 10:32:24 crc kubenswrapper[4953]: I1211 10:32:24.535018 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ab07f951-5c8d-428b-9b26-52ea2284ee52","Type":"ContainerStarted","Data":"41bbb6ee795ebc3c22e509c06b7f775810c8aed2e9da9f0f6b746d7e045c0c23"} Dec 11 10:32:24 crc kubenswrapper[4953]: I1211 10:32:24.535527 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 11 10:32:24 crc kubenswrapper[4953]: I1211 10:32:24.538009 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"01196778-96de-4f79-b9ac-e01243f86ebb","Type":"ContainerStarted","Data":"94ecea46a02f645c72f741be8c0e8c18496d154632db9f0e42995f5ff8e48207"} Dec 11 10:32:24 crc kubenswrapper[4953]: I1211 10:32:24.564462 4953 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.7660277410000003 podStartE2EDuration="52.564441056s" podCreationTimestamp="2025-12-11 10:31:32 +0000 UTC" firstStartedPulling="2025-12-11 10:31:33.464941934 +0000 UTC m=+1211.488800967" lastFinishedPulling="2025-12-11 10:32:23.263355249 +0000 UTC m=+1261.287214282" observedRunningTime="2025-12-11 10:32:24.555407558 +0000 UTC m=+1262.579266601" watchObservedRunningTime="2025-12-11 10:32:24.564441056 +0000 UTC m=+1262.588300099" Dec 11 10:32:25 crc kubenswrapper[4953]: I1211 10:32:25.548355 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23f99edb-3870-42f3-bdef-ec4db335ba35","Type":"ContainerStarted","Data":"6ee2835a71d7d5e83718a29d1cfff494a3741681572cda87af607b814ba32761"} Dec 11 10:32:25 crc kubenswrapper[4953]: I1211 10:32:25.551142 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e","Type":"ContainerStarted","Data":"8c899ec3f19ce335b2f89755f8a4e4532bfe9f417bd7fb76d6371e306044ac4e"} Dec 11 10:32:25 crc kubenswrapper[4953]: I1211 10:32:25.554416 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4237606-fdcf-403b-8e5a-1bbb4a2e38de","Type":"ContainerStarted","Data":"e70fe7fa2779f3637bf42c139e92bf6db02367cfe5162c9dbcefd534285e7752"} Dec 11 10:32:25 crc kubenswrapper[4953]: I1211 10:32:25.574740 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbtwm" event={"ID":"5cfd14e5-05e2-4cc5-ba83-259321c6f872","Type":"ContainerStarted","Data":"f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6"} Dec 11 10:32:25 crc kubenswrapper[4953]: I1211 10:32:25.574833 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mbtwm" Dec 11 10:32:25 crc kubenswrapper[4953]: I1211 10:32:25.574887 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mbtwm" Dec 11 10:32:25 crc kubenswrapper[4953]: I1211 10:32:25.603977 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=38.40160796 podStartE2EDuration="47.603959425s" podCreationTimestamp="2025-12-11 10:31:38 +0000 UTC" firstStartedPulling="2025-12-11 10:32:15.860340545 +0000 UTC m=+1253.884199578" lastFinishedPulling="2025-12-11 10:32:25.06269201 +0000 UTC m=+1263.086551043" observedRunningTime="2025-12-11 10:32:25.603096897 +0000 UTC m=+1263.626955940" watchObservedRunningTime="2025-12-11 10:32:25.603959425 +0000 UTC m=+1263.627818458" Dec 11 10:32:25 crc kubenswrapper[4953]: I1211 10:32:25.652013 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=35.650667881 podStartE2EDuration="45.651996187s" podCreationTimestamp="2025-12-11 10:31:40 +0000 UTC" firstStartedPulling="2025-12-11 10:32:15.039451355 +0000 UTC m=+1253.063310388" lastFinishedPulling="2025-12-11 10:32:25.040779661 +0000 UTC m=+1263.064638694" observedRunningTime="2025-12-11 10:32:25.628956672 +0000 UTC m=+1263.652815715" watchObservedRunningTime="2025-12-11 10:32:25.651996187 +0000 UTC m=+1263.675855220" Dec 11 10:32:25 crc kubenswrapper[4953]: I1211 10:32:25.662272 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-mbtwm" 
podStartSLOduration=42.57202175 podStartE2EDuration="48.662254963s" podCreationTimestamp="2025-12-11 10:31:37 +0000 UTC" firstStartedPulling="2025-12-11 10:32:15.100964696 +0000 UTC m=+1253.124823729" lastFinishedPulling="2025-12-11 10:32:21.191197909 +0000 UTC m=+1259.215056942" observedRunningTime="2025-12-11 10:32:25.654410253 +0000 UTC m=+1263.678269296" watchObservedRunningTime="2025-12-11 10:32:25.662254963 +0000 UTC m=+1263.686113996" Dec 11 10:32:26 crc kubenswrapper[4953]: I1211 10:32:26.735945 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b29c8985-0d8c-4382-9969-29422929136f","Type":"ContainerStarted","Data":"8ccd21efbe477435dfe6f0792b8d26e2c55b2f1636676f65356fe0d625e5ad71"} Dec 11 10:32:26 crc kubenswrapper[4953]: I1211 10:32:26.738735 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27258186-4cab-45b4-a20c-a4c3ddc82f76","Type":"ContainerStarted","Data":"01fafdd99eaa9ede427831b53dacf59c2f223520959b1141bcba498e96fc5d55"} Dec 11 10:32:27 crc kubenswrapper[4953]: I1211 10:32:27.063369 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 11 10:32:27 crc kubenswrapper[4953]: I1211 10:32:27.063424 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 11 10:32:27 crc kubenswrapper[4953]: I1211 10:32:27.104523 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 11 10:32:27 crc kubenswrapper[4953]: I1211 10:32:27.696018 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 11 10:32:27 crc kubenswrapper[4953]: I1211 10:32:27.741888 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 11 10:32:27 crc kubenswrapper[4953]: I1211 10:32:27.745953 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 11 10:32:27 crc kubenswrapper[4953]: I1211 10:32:27.797975 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 11 10:32:27 crc kubenswrapper[4953]: I1211 10:32:27.798588 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.060847 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-sgr4j"] Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.074517 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-tqd68"] Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.105522 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.107470 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tqd68"] Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.110429 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.144622 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-h97wh"] Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.145969 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.149778 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.151469 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/48902dd9-8c9f-4983-b8dd-6f22f4382a19-ovn-rundir\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.151740 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48902dd9-8c9f-4983-b8dd-6f22f4382a19-config\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.151876 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/48902dd9-8c9f-4983-b8dd-6f22f4382a19-ovs-rundir\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.152014 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48902dd9-8c9f-4983-b8dd-6f22f4382a19-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.152078 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hfsq\" (UniqueName: \"kubernetes.io/projected/48902dd9-8c9f-4983-b8dd-6f22f4382a19-kube-api-access-8hfsq\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.152114 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48902dd9-8c9f-4983-b8dd-6f22f4382a19-combined-ca-bundle\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.171678 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-h97wh"] Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.220537 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-p9d9d"] Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.248850 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-xc587"] Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.250157 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.253535 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-ovsdbserver-nb\") pod \"dnsmasq-dns-7bbdc7ccd7-h97wh\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.253667 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48902dd9-8c9f-4983-b8dd-6f22f4382a19-config\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.253734 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/48902dd9-8c9f-4983-b8dd-6f22f4382a19-ovs-rundir\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.253764 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89vmh\" (UniqueName: \"kubernetes.io/projected/57347790-7a3f-4af9-acb8-5b6652c4240d-kube-api-access-89vmh\") pod \"dnsmasq-dns-7bbdc7ccd7-h97wh\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.253815 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48902dd9-8c9f-4983-b8dd-6f22f4382a19-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.253862 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hfsq\" (UniqueName: \"kubernetes.io/projected/48902dd9-8c9f-4983-b8dd-6f22f4382a19-kube-api-access-8hfsq\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.253906 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-dns-svc\") pod \"dnsmasq-dns-7bbdc7ccd7-h97wh\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.253934 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48902dd9-8c9f-4983-b8dd-6f22f4382a19-combined-ca-bundle\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.253962 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-config\") pod \"dnsmasq-dns-7bbdc7ccd7-h97wh\" (UID: 
\"57347790-7a3f-4af9-acb8-5b6652c4240d\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.253996 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/48902dd9-8c9f-4983-b8dd-6f22f4382a19-ovn-rundir\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.253993 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/48902dd9-8c9f-4983-b8dd-6f22f4382a19-ovs-rundir\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.254097 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/48902dd9-8c9f-4983-b8dd-6f22f4382a19-ovn-rundir\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.254357 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48902dd9-8c9f-4983-b8dd-6f22f4382a19-config\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.257821 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.262747 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48902dd9-8c9f-4983-b8dd-6f22f4382a19-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.315183 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-xc587"] Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.315347 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48902dd9-8c9f-4983-b8dd-6f22f4382a19-combined-ca-bundle\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.326224 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hfsq\" (UniqueName: \"kubernetes.io/projected/48902dd9-8c9f-4983-b8dd-6f22f4382a19-kube-api-access-8hfsq\") pod \"ovn-controller-metrics-tqd68\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.358458 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89vmh\" (UniqueName: \"kubernetes.io/projected/57347790-7a3f-4af9-acb8-5b6652c4240d-kube-api-access-89vmh\") pod \"dnsmasq-dns-7bbdc7ccd7-h97wh\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.358555 4953 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-dns-svc\") pod \"dnsmasq-dns-7bbdc7ccd7-h97wh\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.358595 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-config\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.358623 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-config\") pod \"dnsmasq-dns-7bbdc7ccd7-h97wh\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.358651 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.358672 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.358692 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-ovsdbserver-nb\") pod \"dnsmasq-dns-7bbdc7ccd7-h97wh\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.358707 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6jhc\" (UniqueName: \"kubernetes.io/projected/a71a3f06-3f08-4259-a6e0-5c615e05ee23-kube-api-access-t6jhc\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.358733 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.360117 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-dns-svc\") pod \"dnsmasq-dns-7bbdc7ccd7-h97wh\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.360534 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-config\") pod \"dnsmasq-dns-7bbdc7ccd7-h97wh\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.360708 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-ovsdbserver-nb\") pod \"dnsmasq-dns-7bbdc7ccd7-h97wh\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.373836 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.375211 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.379595 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-qjpz6" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.379787 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.379956 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.380309 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.390314 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89vmh\" (UniqueName: \"kubernetes.io/projected/57347790-7a3f-4af9-acb8-5b6652c4240d-kube-api-access-89vmh\") pod \"dnsmasq-dns-7bbdc7ccd7-h97wh\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.401426 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.448252 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.460062 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.460108 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4287349e-ff2e-483c-9ede-08ec5e03a2b4-config\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.460142 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v8nq\" (UniqueName: \"kubernetes.io/projected/4287349e-ff2e-483c-9ede-08ec5e03a2b4-kube-api-access-4v8nq\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.460168 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.460256 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4287349e-ff2e-483c-9ede-08ec5e03a2b4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.460276 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.460308 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-config\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.460335 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4287349e-ff2e-483c-9ede-08ec5e03a2b4-scripts\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.460351 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.460388 4953 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.460409 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.460431 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6jhc\" (UniqueName: \"kubernetes.io/projected/a71a3f06-3f08-4259-a6e0-5c615e05ee23-kube-api-access-t6jhc\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.461518 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.462141 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-config\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.463180 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.463387 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.468165 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.478200 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6jhc\" (UniqueName: \"kubernetes.io/projected/a71a3f06-3f08-4259-a6e0-5c615e05ee23-kube-api-access-t6jhc\") pod \"dnsmasq-dns-757dc6fff9-xc587\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.561950 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4287349e-ff2e-483c-9ede-08ec5e03a2b4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.562389 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.562454 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4287349e-ff2e-483c-9ede-08ec5e03a2b4-scripts\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.562482 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.562549 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4287349e-ff2e-483c-9ede-08ec5e03a2b4-config\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.562555 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4287349e-ff2e-483c-9ede-08ec5e03a2b4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.562601 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v8nq\" (UniqueName: \"kubernetes.io/projected/4287349e-ff2e-483c-9ede-08ec5e03a2b4-kube-api-access-4v8nq\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.562636 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.563511 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4287349e-ff2e-483c-9ede-08ec5e03a2b4-scripts\") pod 
\"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.564835 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4287349e-ff2e-483c-9ede-08ec5e03a2b4-config\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.571984 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.572033 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.576077 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.602752 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v8nq\" (UniqueName: \"kubernetes.io/projected/4287349e-ff2e-483c-9ede-08ec5e03a2b4-kube-api-access-4v8nq\") pod \"ovn-northd-0\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") " pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.686205 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.728887 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 10:32:28 crc kubenswrapper[4953]: I1211 10:32:28.988791 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tqd68"] Dec 11 10:32:29 crc kubenswrapper[4953]: W1211 10:32:29.000183 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48902dd9_8c9f_4983_b8dd_6f22f4382a19.slice/crio-83fcd8493ce3c7045884617ae0baff2087a15f99d164d4a580ff6ea0be0b5085 WatchSource:0}: Error finding container 83fcd8493ce3c7045884617ae0baff2087a15f99d164d4a580ff6ea0be0b5085: Status 404 returned error can't find the container with id 83fcd8493ce3c7045884617ae0baff2087a15f99d164d4a580ff6ea0be0b5085 Dec 11 10:32:29 crc kubenswrapper[4953]: I1211 10:32:29.139315 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-h97wh"] Dec 11 10:32:29 crc kubenswrapper[4953]: I1211 10:32:29.239720 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-xc587"] Dec 11 10:32:29 crc kubenswrapper[4953]: I1211 10:32:29.349827 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 10:32:29 crc kubenswrapper[4953]: I1211 10:32:29.761497 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" event={"ID":"57347790-7a3f-4af9-acb8-5b6652c4240d","Type":"ContainerStarted","Data":"211ba5a14cb3b6139e80a90edd6cf04f6193f358c809a108e76be8c77aa983ce"} Dec 11 10:32:29 crc kubenswrapper[4953]: I1211 10:32:29.762848 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tqd68" event={"ID":"48902dd9-8c9f-4983-b8dd-6f22f4382a19","Type":"ContainerStarted","Data":"83fcd8493ce3c7045884617ae0baff2087a15f99d164d4a580ff6ea0be0b5085"} Dec 11 10:32:29 crc kubenswrapper[4953]: I1211 10:32:29.764049 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4287349e-ff2e-483c-9ede-08ec5e03a2b4","Type":"ContainerStarted","Data":"cf045c703a6eff02b21452057e74d0bf94c9c669d3f08d464785c479c49135e8"} Dec 11 10:32:29 crc kubenswrapper[4953]: I1211 10:32:29.765198 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" event={"ID":"a71a3f06-3f08-4259-a6e0-5c615e05ee23","Type":"ContainerStarted","Data":"e7f43fd09471fb45225554bf67efde6385f26a5c009d7af0b81cd81c9eff47c9"} Dec 11 10:32:30 crc kubenswrapper[4953]: I1211 10:32:30.773749 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tqd68" event={"ID":"48902dd9-8c9f-4983-b8dd-6f22f4382a19","Type":"ContainerStarted","Data":"9700ecceafec88bb52bb9474793b818e8b7aef5592713f65ce2d49b857374493"} Dec 11 10:32:30 crc kubenswrapper[4953]: I1211 10:32:30.789947 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-tqd68" podStartSLOduration=2.789925669 podStartE2EDuration="2.789925669s" podCreationTimestamp="2025-12-11 10:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:32:30.788331709 +0000 UTC m=+1268.812190742" watchObservedRunningTime="2025-12-11 10:32:30.789925669 +0000 UTC m=+1268.813784702" Dec 11 10:32:32 crc kubenswrapper[4953]: I1211 10:32:32.932685 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 11 10:32:34 crc 
kubenswrapper[4953]: I1211 10:32:34.779989 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-h97wh"] Dec 11 10:32:34 crc kubenswrapper[4953]: I1211 10:32:34.875114 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-9t76k"] Dec 11 10:32:34 crc kubenswrapper[4953]: I1211 10:32:34.878079 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:34 crc kubenswrapper[4953]: I1211 10:32:34.883836 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-9t76k"] Dec 11 10:32:34 crc kubenswrapper[4953]: I1211 10:32:34.925940 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:34 crc kubenswrapper[4953]: I1211 10:32:34.926093 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:34 crc kubenswrapper[4953]: I1211 10:32:34.926217 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:34 crc kubenswrapper[4953]: I1211 10:32:34.926428 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg52l\" (UniqueName: \"kubernetes.io/projected/2c1cb581-9d65-4d31-857c-de67900c05bf-kube-api-access-dg52l\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:34 crc kubenswrapper[4953]: I1211 10:32:34.926473 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-config\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.027933 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.028057 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg52l\" (UniqueName: \"kubernetes.io/projected/2c1cb581-9d65-4d31-857c-de67900c05bf-kube-api-access-dg52l\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.028081 4953 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-config\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.028132 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.028159 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.029586 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.029564 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-config\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.029658 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.029648 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.060563 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg52l\" (UniqueName: \"kubernetes.io/projected/2c1cb581-9d65-4d31-857c-de67900c05bf-kube-api-access-dg52l\") pod \"dnsmasq-dns-6cb545bd4c-9t76k\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.201856 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.871123 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.881335 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.886417 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.886555 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-4bjht" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.886662 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.889026 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 11 10:32:35 crc kubenswrapper[4953]: I1211 10:32:35.890763 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.056672 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.056773 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2msdn\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-kube-api-access-2msdn\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.056836 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.056932 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7be1c768-78bb-476b-b51d-8e4fe80b8500-lock\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.056955 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7be1c768-78bb-476b-b51d-8e4fe80b8500-cache\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.158880 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7be1c768-78bb-476b-b51d-8e4fe80b8500-lock\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.159269 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7be1c768-78bb-476b-b51d-8e4fe80b8500-cache\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.159379 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.159452 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2msdn\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-kube-api-access-2msdn\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.159521 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: E1211 10:32:36.159738 4953 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 10:32:36 crc kubenswrapper[4953]: E1211 10:32:36.159772 4953 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 10:32:36 crc kubenswrapper[4953]: E1211 10:32:36.159872 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift podName:7be1c768-78bb-476b-b51d-8e4fe80b8500 nodeName:}" failed. No retries permitted until 2025-12-11 10:32:36.659834348 +0000 UTC m=+1274.683693471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift") pod "swift-storage-0" (UID: "7be1c768-78bb-476b-b51d-8e4fe80b8500") : configmap "swift-ring-files" not found Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.160640 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7be1c768-78bb-476b-b51d-8e4fe80b8500-lock\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.161039 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7be1c768-78bb-476b-b51d-8e4fe80b8500-cache\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.161400 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.190869 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.191842 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2msdn\" (UniqueName: 
\"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-kube-api-access-2msdn\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: I1211 10:32:36.688456 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:36 crc kubenswrapper[4953]: E1211 10:32:36.688968 4953 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 10:32:36 crc kubenswrapper[4953]: E1211 10:32:36.689146 4953 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 10:32:36 crc kubenswrapper[4953]: E1211 10:32:36.689199 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift podName:7be1c768-78bb-476b-b51d-8e4fe80b8500 nodeName:}" failed. No retries permitted until 2025-12-11 10:32:37.689181673 +0000 UTC m=+1275.713040706 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift") pod "swift-storage-0" (UID: "7be1c768-78bb-476b-b51d-8e4fe80b8500") : configmap "swift-ring-files" not found Dec 11 10:32:37 crc kubenswrapper[4953]: I1211 10:32:37.046849 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-9t76k"] Dec 11 10:32:37 crc kubenswrapper[4953]: I1211 10:32:37.705715 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:37 crc kubenswrapper[4953]: E1211 10:32:37.705941 4953 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 10:32:37 crc kubenswrapper[4953]: E1211 10:32:37.705972 4953 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 10:32:37 crc kubenswrapper[4953]: E1211 10:32:37.706067 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift podName:7be1c768-78bb-476b-b51d-8e4fe80b8500 nodeName:}" failed. No retries permitted until 2025-12-11 10:32:39.706017999 +0000 UTC m=+1277.729877032 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift") pod "swift-storage-0" (UID: "7be1c768-78bb-476b-b51d-8e4fe80b8500") : configmap "swift-ring-files" not found Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.739442 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:39 crc kubenswrapper[4953]: E1211 10:32:39.739678 4953 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 10:32:39 crc kubenswrapper[4953]: E1211 10:32:39.740145 4953 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 10:32:39 crc kubenswrapper[4953]: E1211 10:32:39.740199 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift podName:7be1c768-78bb-476b-b51d-8e4fe80b8500 nodeName:}" failed. No retries permitted until 2025-12-11 10:32:43.740180856 +0000 UTC m=+1281.764039889 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift") pod "swift-storage-0" (UID: "7be1c768-78bb-476b-b51d-8e4fe80b8500") : configmap "swift-ring-files" not found Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.838299 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-8kmhp"] Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.839512 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.844182 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.844254 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.844543 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.868121 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-t4cc8"] Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.869091 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.894701 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-t4cc8"] Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.898884 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" event={"ID":"2c1cb581-9d65-4d31-857c-de67900c05bf","Type":"ContainerStarted","Data":"b9fd3aba719059ec59d37f0c01deef4872a40b2b70e03a344d5f97701fbc31f1"} Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.915123 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-8kmhp"] Dec 11 10:32:39 crc kubenswrapper[4953]: E1211 10:32:39.915799 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-d7jmz ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-8kmhp" podUID="40f659e1-a9a4-4b3c-b5e2-0c100c77e91e" Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.938410 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-8kmhp"] Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.943037 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-etc-swift\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.943186 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-swiftconf\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.943324 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-ring-data-devices\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.943389 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-scripts\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.943506 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-combined-ca-bundle\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.943537 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-dispersionconf\") pod 
\"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:39 crc kubenswrapper[4953]: I1211 10:32:39.943557 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7jmz\" (UniqueName: \"kubernetes.io/projected/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-kube-api-access-d7jmz\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.044987 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-swiftconf\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045046 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21b311c3-3edf-4905-9929-79787eb29bb8-scripts\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045132 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-swiftconf\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045188 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4m8j\" (UniqueName: \"kubernetes.io/projected/21b311c3-3edf-4905-9929-79787eb29bb8-kube-api-access-c4m8j\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045218 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-ring-data-devices\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045248 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-scripts\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045288 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-dispersionconf\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045320 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-combined-ca-bundle\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045356 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-combined-ca-bundle\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045378 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/21b311c3-3edf-4905-9929-79787eb29bb8-ring-data-devices\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045406 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-dispersionconf\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045431 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jmz\" (UniqueName: \"kubernetes.io/projected/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-kube-api-access-d7jmz\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045463 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-etc-swift\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.045498 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/21b311c3-3edf-4905-9929-79787eb29bb8-etc-swift\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.046508 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-etc-swift\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.046744 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-ring-data-devices\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.047174 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-scripts\") pod 
\"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.050966 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-combined-ca-bundle\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.050982 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-swiftconf\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.053976 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-dispersionconf\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.070447 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jmz\" (UniqueName: \"kubernetes.io/projected/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-kube-api-access-d7jmz\") pod \"swift-ring-rebalance-8kmhp\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.147390 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-dispersionconf\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.147445 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-combined-ca-bundle\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.147473 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/21b311c3-3edf-4905-9929-79787eb29bb8-ring-data-devices\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.147515 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/21b311c3-3edf-4905-9929-79787eb29bb8-etc-swift\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.147586 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-swiftconf\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 
10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.147605 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21b311c3-3edf-4905-9929-79787eb29bb8-scripts\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.147652 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4m8j\" (UniqueName: \"kubernetes.io/projected/21b311c3-3edf-4905-9929-79787eb29bb8-kube-api-access-c4m8j\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.148731 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/21b311c3-3edf-4905-9929-79787eb29bb8-etc-swift\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.149110 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/21b311c3-3edf-4905-9929-79787eb29bb8-ring-data-devices\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.149421 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21b311c3-3edf-4905-9929-79787eb29bb8-scripts\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.153076 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-dispersionconf\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.153450 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-combined-ca-bundle\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.157029 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-swiftconf\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.172058 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4m8j\" (UniqueName: \"kubernetes.io/projected/21b311c3-3edf-4905-9929-79787eb29bb8-kube-api-access-c4m8j\") pod \"swift-ring-rebalance-t4cc8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.191639 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.709716 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-t4cc8"] Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.908764 4953 generic.go:334] "Generic (PLEG): container finished" podID="a71a3f06-3f08-4259-a6e0-5c615e05ee23" containerID="a4094f4c74adbe2649b33224429248127db0a30a08d40f549311ae1ab232d0ea" exitCode=0 Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.908810 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" event={"ID":"a71a3f06-3f08-4259-a6e0-5c615e05ee23","Type":"ContainerDied","Data":"a4094f4c74adbe2649b33224429248127db0a30a08d40f549311ae1ab232d0ea"} Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.914111 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab","Type":"ContainerStarted","Data":"6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b"} Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.915346 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.919162 4953 generic.go:334] "Generic (PLEG): container finished" podID="23f99edb-3870-42f3-bdef-ec4db335ba35" containerID="6ee2835a71d7d5e83718a29d1cfff494a3741681572cda87af607b814ba32761" exitCode=0 Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.919437 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23f99edb-3870-42f3-bdef-ec4db335ba35","Type":"ContainerDied","Data":"6ee2835a71d7d5e83718a29d1cfff494a3741681572cda87af607b814ba32761"} Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.931976 4953 generic.go:334] "Generic (PLEG): container finished" podID="27258186-4cab-45b4-a20c-a4c3ddc82f76" containerID="01fafdd99eaa9ede427831b53dacf59c2f223520959b1141bcba498e96fc5d55" exitCode=0 Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.932049 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27258186-4cab-45b4-a20c-a4c3ddc82f76","Type":"ContainerDied","Data":"01fafdd99eaa9ede427831b53dacf59c2f223520959b1141bcba498e96fc5d55"} Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.940285 4953 generic.go:334] "Generic (PLEG): container finished" podID="33fa5e5b-3be4-4fb2-8a05-e9f500184264" containerID="468a3ca026c3abee25bf10caf34fafc166670af7abcbe04feb357213379664c8" exitCode=0 Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.940364 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" event={"ID":"33fa5e5b-3be4-4fb2-8a05-e9f500184264","Type":"ContainerDied","Data":"468a3ca026c3abee25bf10caf34fafc166670af7abcbe04feb357213379664c8"} Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.954326 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=8.712392533 podStartE2EDuration="1m6.954309492s" podCreationTimestamp="2025-12-11 10:31:34 +0000 UTC" firstStartedPulling="2025-12-11 10:31:42.032808708 +0000 UTC m=+1220.056667731" lastFinishedPulling="2025-12-11 10:32:40.274725657 +0000 UTC m=+1278.298584690" observedRunningTime="2025-12-11 10:32:40.949846519 +0000 UTC m=+1278.973705562" watchObservedRunningTime="2025-12-11 10:32:40.954309492 
+0000 UTC m=+1278.978168525" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.956162 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4287349e-ff2e-483c-9ede-08ec5e03a2b4","Type":"ContainerStarted","Data":"c0f6853d6258372aa9946f5e58c9f253d8e32cbaa5a5914801b2de468c7d1703"} Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.958006 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-t4cc8" event={"ID":"21b311c3-3edf-4905-9929-79787eb29bb8","Type":"ContainerStarted","Data":"c7e5e46fcb429ae3d03bcd8223a6d734009df75bbcaea7954dd70dac27f966ed"} Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.966878 4953 generic.go:334] "Generic (PLEG): container finished" podID="c3962e78-992c-4a5f-a874-2d744965e3bb" containerID="fbbe128d8acde527edab52a428b96af9f6d264a1fdf7804028ac6595f351e370" exitCode=0 Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.966989 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" event={"ID":"c3962e78-992c-4a5f-a874-2d744965e3bb","Type":"ContainerDied","Data":"fbbe128d8acde527edab52a428b96af9f6d264a1fdf7804028ac6595f351e370"} Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.969934 4953 generic.go:334] "Generic (PLEG): container finished" podID="57347790-7a3f-4af9-acb8-5b6652c4240d" containerID="af9e72b1d8e0d5037140a9e9e8b29b5c4f484f982453553487a22d20185d7b64" exitCode=0 Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.970012 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" event={"ID":"57347790-7a3f-4af9-acb8-5b6652c4240d","Type":"ContainerDied","Data":"af9e72b1d8e0d5037140a9e9e8b29b5c4f484f982453553487a22d20185d7b64"} Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.982221 4953 generic.go:334] "Generic (PLEG): container finished" podID="2c1cb581-9d65-4d31-857c-de67900c05bf" containerID="da84b1e37c527f96dd5fde42e9efc32cf460d1be64b21ab30e46f0d9ae377d84" exitCode=0 Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.982329 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:40 crc kubenswrapper[4953]: I1211 10:32:40.983763 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" event={"ID":"2c1cb581-9d65-4d31-857c-de67900c05bf","Type":"ContainerDied","Data":"da84b1e37c527f96dd5fde42e9efc32cf460d1be64b21ab30e46f0d9ae377d84"} Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.274043 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.335634 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.378306 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-etc-swift\") pod \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.378377 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-ring-data-devices\") pod \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.378416 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-swiftconf\") pod \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.378512 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-combined-ca-bundle\") pod \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.378533 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-scripts\") pod \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.378640 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-dispersionconf\") pod \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.378678 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7jmz\" (UniqueName: \"kubernetes.io/projected/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-kube-api-access-d7jmz\") pod \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\" (UID: \"40f659e1-a9a4-4b3c-b5e2-0c100c77e91e\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.379886 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e" (UID: "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.380405 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-scripts" (OuterVolumeSpecName: "scripts") pod "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e" (UID: "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.380613 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e" (UID: "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.385758 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e" (UID: "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.386863 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-kube-api-access-d7jmz" (OuterVolumeSpecName: "kube-api-access-d7jmz") pod "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e" (UID: "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e"). InnerVolumeSpecName "kube-api-access-d7jmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.386919 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e" (UID: "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.387062 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e" (UID: "40f659e1-a9a4-4b3c-b5e2-0c100c77e91e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.405048 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.430659 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:41 crc kubenswrapper[4953]: E1211 10:32:41.462353 4953 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 11 10:32:41 crc kubenswrapper[4953]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/a71a3f06-3f08-4259-a6e0-5c615e05ee23/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 11 10:32:41 crc kubenswrapper[4953]: > podSandboxID="e7f43fd09471fb45225554bf67efde6385f26a5c009d7af0b81cd81c9eff47c9" Dec 11 10:32:41 crc kubenswrapper[4953]: E1211 10:32:41.462657 4953 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 11 10:32:41 crc kubenswrapper[4953]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h99h64ch5dbh6dh555h587h64bh5cfh647h5fdh57ch679h9h597h5f5hbch59bh54fh575h566h667h586h5f5h65ch5bch57h68h65ch58bh694h5cfq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6jhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-757dc6fff9-xc587_openstack(a71a3f06-3f08-4259-a6e0-5c615e05ee23): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/a71a3f06-3f08-4259-a6e0-5c615e05ee23/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 11 10:32:41 crc kubenswrapper[4953]: > logger="UnhandledError" Dec 11 10:32:41 crc kubenswrapper[4953]: E1211 10:32:41.463815 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/a71a3f06-3f08-4259-a6e0-5c615e05ee23/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" podUID="a71a3f06-3f08-4259-a6e0-5c615e05ee23" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.480372 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33fa5e5b-3be4-4fb2-8a05-e9f500184264-dns-svc\") pod \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.480682 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8xh9\" (UniqueName: \"kubernetes.io/projected/33fa5e5b-3be4-4fb2-8a05-e9f500184264-kube-api-access-t8xh9\") pod \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.481164 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3962e78-992c-4a5f-a874-2d744965e3bb-config\") pod \"c3962e78-992c-4a5f-a874-2d744965e3bb\" (UID: \"c3962e78-992c-4a5f-a874-2d744965e3bb\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.481267 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6z6d\" (UniqueName: \"kubernetes.io/projected/c3962e78-992c-4a5f-a874-2d744965e3bb-kube-api-access-f6z6d\") pod \"c3962e78-992c-4a5f-a874-2d744965e3bb\" (UID: \"c3962e78-992c-4a5f-a874-2d744965e3bb\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.481330 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33fa5e5b-3be4-4fb2-8a05-e9f500184264-config\") pod \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\" (UID: \"33fa5e5b-3be4-4fb2-8a05-e9f500184264\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.481924 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3962e78-992c-4a5f-a874-2d744965e3bb-dns-svc\") pod \"c3962e78-992c-4a5f-a874-2d744965e3bb\" (UID: \"c3962e78-992c-4a5f-a874-2d744965e3bb\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.483675 4953 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.483709 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.483810 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.483831 4953 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.483843 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7jmz\" (UniqueName: \"kubernetes.io/projected/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-kube-api-access-d7jmz\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.483856 4953 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.483867 4953 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.488982 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3962e78-992c-4a5f-a874-2d744965e3bb-kube-api-access-f6z6d" (OuterVolumeSpecName: "kube-api-access-f6z6d") pod "c3962e78-992c-4a5f-a874-2d744965e3bb" (UID: "c3962e78-992c-4a5f-a874-2d744965e3bb"). InnerVolumeSpecName "kube-api-access-f6z6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.489143 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33fa5e5b-3be4-4fb2-8a05-e9f500184264-kube-api-access-t8xh9" (OuterVolumeSpecName: "kube-api-access-t8xh9") pod "33fa5e5b-3be4-4fb2-8a05-e9f500184264" (UID: "33fa5e5b-3be4-4fb2-8a05-e9f500184264"). InnerVolumeSpecName "kube-api-access-t8xh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.521896 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3962e78-992c-4a5f-a874-2d744965e3bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3962e78-992c-4a5f-a874-2d744965e3bb" (UID: "c3962e78-992c-4a5f-a874-2d744965e3bb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.528517 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33fa5e5b-3be4-4fb2-8a05-e9f500184264-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33fa5e5b-3be4-4fb2-8a05-e9f500184264" (UID: "33fa5e5b-3be4-4fb2-8a05-e9f500184264"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.529762 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33fa5e5b-3be4-4fb2-8a05-e9f500184264-config" (OuterVolumeSpecName: "config") pod "33fa5e5b-3be4-4fb2-8a05-e9f500184264" (UID: "33fa5e5b-3be4-4fb2-8a05-e9f500184264"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.541750 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3962e78-992c-4a5f-a874-2d744965e3bb-config" (OuterVolumeSpecName: "config") pod "c3962e78-992c-4a5f-a874-2d744965e3bb" (UID: "c3962e78-992c-4a5f-a874-2d744965e3bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.585480 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89vmh\" (UniqueName: \"kubernetes.io/projected/57347790-7a3f-4af9-acb8-5b6652c4240d-kube-api-access-89vmh\") pod \"57347790-7a3f-4af9-acb8-5b6652c4240d\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.585806 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-ovsdbserver-nb\") pod \"57347790-7a3f-4af9-acb8-5b6652c4240d\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.585872 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-config\") pod \"57347790-7a3f-4af9-acb8-5b6652c4240d\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.585949 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-dns-svc\") pod \"57347790-7a3f-4af9-acb8-5b6652c4240d\" (UID: \"57347790-7a3f-4af9-acb8-5b6652c4240d\") " Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.586441 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33fa5e5b-3be4-4fb2-8a05-e9f500184264-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.586459 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8xh9\" (UniqueName: \"kubernetes.io/projected/33fa5e5b-3be4-4fb2-8a05-e9f500184264-kube-api-access-t8xh9\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.586469 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3962e78-992c-4a5f-a874-2d744965e3bb-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 
10:32:41.586481 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6z6d\" (UniqueName: \"kubernetes.io/projected/c3962e78-992c-4a5f-a874-2d744965e3bb-kube-api-access-f6z6d\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.586490 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33fa5e5b-3be4-4fb2-8a05-e9f500184264-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.586497 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3962e78-992c-4a5f-a874-2d744965e3bb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.589787 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57347790-7a3f-4af9-acb8-5b6652c4240d-kube-api-access-89vmh" (OuterVolumeSpecName: "kube-api-access-89vmh") pod "57347790-7a3f-4af9-acb8-5b6652c4240d" (UID: "57347790-7a3f-4af9-acb8-5b6652c4240d"). InnerVolumeSpecName "kube-api-access-89vmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.603898 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57347790-7a3f-4af9-acb8-5b6652c4240d" (UID: "57347790-7a3f-4af9-acb8-5b6652c4240d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.606565 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-config" (OuterVolumeSpecName: "config") pod "57347790-7a3f-4af9-acb8-5b6652c4240d" (UID: "57347790-7a3f-4af9-acb8-5b6652c4240d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.626801 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "57347790-7a3f-4af9-acb8-5b6652c4240d" (UID: "57347790-7a3f-4af9-acb8-5b6652c4240d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.687531 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89vmh\" (UniqueName: \"kubernetes.io/projected/57347790-7a3f-4af9-acb8-5b6652c4240d-kube-api-access-89vmh\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.687586 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.687596 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.687604 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57347790-7a3f-4af9-acb8-5b6652c4240d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.991790 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" event={"ID":"57347790-7a3f-4af9-acb8-5b6652c4240d","Type":"ContainerDied","Data":"211ba5a14cb3b6139e80a90edd6cf04f6193f358c809a108e76be8c77aa983ce"} Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.991837 4953 scope.go:117] "RemoveContainer" containerID="af9e72b1d8e0d5037140a9e9e8b29b5c4f484f982453553487a22d20185d7b64" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.991859 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-h97wh" Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.996445 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" event={"ID":"2c1cb581-9d65-4d31-857c-de67900c05bf","Type":"ContainerStarted","Data":"185b3b07d33dd1056fb432b491574d0038ab5e253a6ff737dfbfcf3db6f243a8"} Dec 11 10:32:41 crc kubenswrapper[4953]: I1211 10:32:41.996685 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.000168 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23f99edb-3870-42f3-bdef-ec4db335ba35","Type":"ContainerStarted","Data":"027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476"} Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.003237 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" event={"ID":"c3962e78-992c-4a5f-a874-2d744965e3bb","Type":"ContainerDied","Data":"6400f0a3975d3d79d5ef8c72aaf6c1447bdc26992ee711e04ade1270a899e83d"} Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.003246 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-p9d9d" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.011292 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27258186-4cab-45b4-a20c-a4c3ddc82f76","Type":"ContainerStarted","Data":"0cfe0bd98f32db174fde1333af2c3108717607f2c93978857021eab34e2c9d4e"} Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.019396 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" podStartSLOduration=8.019379085 podStartE2EDuration="8.019379085s" podCreationTimestamp="2025-12-11 10:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:32:42.014074016 +0000 UTC m=+1280.037933069" watchObservedRunningTime="2025-12-11 10:32:42.019379085 +0000 UTC m=+1280.043238118" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.024516 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" event={"ID":"33fa5e5b-3be4-4fb2-8a05-e9f500184264","Type":"ContainerDied","Data":"1a7367ad2fb38295643b4bdf0c5915cdb6506755fb82f20c4098e6c3067c4c55"} Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.024649 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-sgr4j" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.026343 4953 scope.go:117] "RemoveContainer" containerID="fbbe128d8acde527edab52a428b96af9f6d264a1fdf7804028ac6595f351e370" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.047660 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4287349e-ff2e-483c-9ede-08ec5e03a2b4","Type":"ContainerStarted","Data":"08ed06bcd9932bd8cfb8cd17406a3860f1745658a72ff1d03735d51b925d7e64"} Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.047723 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8kmhp" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.049665 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.057340 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=20.265259935 podStartE2EDuration="1m13.057319355s" podCreationTimestamp="2025-12-11 10:31:29 +0000 UTC" firstStartedPulling="2025-12-11 10:31:32.250954462 +0000 UTC m=+1210.274813495" lastFinishedPulling="2025-12-11 10:32:25.043013882 +0000 UTC m=+1263.066872915" observedRunningTime="2025-12-11 10:32:42.043789073 +0000 UTC m=+1280.067648116" watchObservedRunningTime="2025-12-11 10:32:42.057319355 +0000 UTC m=+1280.081178398" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.068182 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371965.78661 podStartE2EDuration="1m11.068166881s" podCreationTimestamp="2025-12-11 10:31:31 +0000 UTC" firstStartedPulling="2025-12-11 10:31:33.142118036 +0000 UTC m=+1211.165977069" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:32:42.066911181 +0000 UTC m=+1280.090770204" watchObservedRunningTime="2025-12-11 10:32:42.068166881 +0000 UTC m=+1280.092025914" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.096975 4953 scope.go:117] "RemoveContainer" containerID="468a3ca026c3abee25bf10caf34fafc166670af7abcbe04feb357213379664c8" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.122037 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-h97wh"] Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.156164 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.316655121 podStartE2EDuration="14.156134215s" podCreationTimestamp="2025-12-11 10:32:28 +0000 UTC" firstStartedPulling="2025-12-11 10:32:29.351816344 +0000 UTC m=+1267.375675377" lastFinishedPulling="2025-12-11 10:32:39.191295428 +0000 UTC m=+1277.215154471" observedRunningTime="2025-12-11 10:32:42.117445362 +0000 UTC m=+1280.141304395" watchObservedRunningTime="2025-12-11 10:32:42.156134215 +0000 UTC m=+1280.179993248" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.158438 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-h97wh"] Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.230521 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-sgr4j"] Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.236149 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-sgr4j"] Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.248784 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-8kmhp"] Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.253876 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-8kmhp"] Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.267517 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-p9d9d"] Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.272511 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-p9d9d"] Dec 11 
10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.484230 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33fa5e5b-3be4-4fb2-8a05-e9f500184264" path="/var/lib/kubelet/pods/33fa5e5b-3be4-4fb2-8a05-e9f500184264/volumes" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.485334 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f659e1-a9a4-4b3c-b5e2-0c100c77e91e" path="/var/lib/kubelet/pods/40f659e1-a9a4-4b3c-b5e2-0c100c77e91e/volumes" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.485785 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57347790-7a3f-4af9-acb8-5b6652c4240d" path="/var/lib/kubelet/pods/57347790-7a3f-4af9-acb8-5b6652c4240d/volumes" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.486253 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3962e78-992c-4a5f-a874-2d744965e3bb" path="/var/lib/kubelet/pods/c3962e78-992c-4a5f-a874-2d744965e3bb/volumes" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.487159 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 11 10:32:42 crc kubenswrapper[4953]: I1211 10:32:42.487190 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 11 10:32:43 crc kubenswrapper[4953]: I1211 10:32:43.063089 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" event={"ID":"a71a3f06-3f08-4259-a6e0-5c615e05ee23","Type":"ContainerStarted","Data":"fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13"} Dec 11 10:32:43 crc kubenswrapper[4953]: I1211 10:32:43.086105 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" podStartSLOduration=4.105319303 podStartE2EDuration="15.086083221s" podCreationTimestamp="2025-12-11 10:32:28 +0000 UTC" firstStartedPulling="2025-12-11 10:32:29.249733999 +0000 UTC m=+1267.273593042" lastFinishedPulling="2025-12-11 10:32:40.230497927 +0000 UTC m=+1278.254356960" observedRunningTime="2025-12-11 10:32:43.083158838 +0000 UTC m=+1281.107017871" watchObservedRunningTime="2025-12-11 10:32:43.086083221 +0000 UTC m=+1281.109942254" Dec 11 10:32:43 crc kubenswrapper[4953]: I1211 10:32:43.687432 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:43 crc kubenswrapper[4953]: I1211 10:32:43.754031 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:43 crc kubenswrapper[4953]: E1211 10:32:43.754318 4953 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 10:32:43 crc kubenswrapper[4953]: E1211 10:32:43.754350 4953 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 10:32:43 crc kubenswrapper[4953]: E1211 10:32:43.754409 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift podName:7be1c768-78bb-476b-b51d-8e4fe80b8500 nodeName:}" failed. 
No retries permitted until 2025-12-11 10:32:51.754391276 +0000 UTC m=+1289.778250309 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift") pod "swift-storage-0" (UID: "7be1c768-78bb-476b-b51d-8e4fe80b8500") : configmap "swift-ring-files" not found Dec 11 10:32:45 crc kubenswrapper[4953]: I1211 10:32:45.083703 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-t4cc8" event={"ID":"21b311c3-3edf-4905-9929-79787eb29bb8","Type":"ContainerStarted","Data":"7589af4710335884c74da5d91b7244e992b3ef4b45efdaec7dc45eeb9e4bf09c"} Dec 11 10:32:47 crc kubenswrapper[4953]: I1211 10:32:47.511983 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 11 10:32:47 crc kubenswrapper[4953]: I1211 10:32:47.556359 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-t4cc8" podStartSLOduration=4.680613614 podStartE2EDuration="8.556337849s" podCreationTimestamp="2025-12-11 10:32:39 +0000 UTC" firstStartedPulling="2025-12-11 10:32:40.715161238 +0000 UTC m=+1278.739020271" lastFinishedPulling="2025-12-11 10:32:44.590885473 +0000 UTC m=+1282.614744506" observedRunningTime="2025-12-11 10:32:45.105644373 +0000 UTC m=+1283.129503496" watchObservedRunningTime="2025-12-11 10:32:47.556337849 +0000 UTC m=+1285.580196872" Dec 11 10:32:47 crc kubenswrapper[4953]: I1211 10:32:47.614221 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 11 10:32:48 crc kubenswrapper[4953]: I1211 10:32:48.688806 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.203841 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.263468 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-xc587"] Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.263816 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" podUID="a71a3f06-3f08-4259-a6e0-5c615e05ee23" containerName="dnsmasq-dns" containerID="cri-o://fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13" gracePeriod=10 Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.780065 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.873031 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-ovsdbserver-nb\") pod \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.873206 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6jhc\" (UniqueName: \"kubernetes.io/projected/a71a3f06-3f08-4259-a6e0-5c615e05ee23-kube-api-access-t6jhc\") pod \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.873249 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-ovsdbserver-sb\") pod \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.873282 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-config\") pod \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.873343 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-dns-svc\") pod \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\" (UID: \"a71a3f06-3f08-4259-a6e0-5c615e05ee23\") " Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.879878 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a71a3f06-3f08-4259-a6e0-5c615e05ee23-kube-api-access-t6jhc" (OuterVolumeSpecName: "kube-api-access-t6jhc") pod "a71a3f06-3f08-4259-a6e0-5c615e05ee23" (UID: "a71a3f06-3f08-4259-a6e0-5c615e05ee23"). InnerVolumeSpecName "kube-api-access-t6jhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.953925 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a71a3f06-3f08-4259-a6e0-5c615e05ee23" (UID: "a71a3f06-3f08-4259-a6e0-5c615e05ee23"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.954008 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-config" (OuterVolumeSpecName: "config") pod "a71a3f06-3f08-4259-a6e0-5c615e05ee23" (UID: "a71a3f06-3f08-4259-a6e0-5c615e05ee23"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.956098 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a71a3f06-3f08-4259-a6e0-5c615e05ee23" (UID: "a71a3f06-3f08-4259-a6e0-5c615e05ee23"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.962093 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a71a3f06-3f08-4259-a6e0-5c615e05ee23" (UID: "a71a3f06-3f08-4259-a6e0-5c615e05ee23"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.976074 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.976117 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6jhc\" (UniqueName: \"kubernetes.io/projected/a71a3f06-3f08-4259-a6e0-5c615e05ee23-kube-api-access-t6jhc\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.976162 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.976176 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:50 crc kubenswrapper[4953]: I1211 10:32:50.976187 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a71a3f06-3f08-4259-a6e0-5c615e05ee23-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.128369 4953 generic.go:334] "Generic (PLEG): container finished" podID="a71a3f06-3f08-4259-a6e0-5c615e05ee23" containerID="fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13" exitCode=0 Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.128436 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.128430 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" event={"ID":"a71a3f06-3f08-4259-a6e0-5c615e05ee23","Type":"ContainerDied","Data":"fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13"} Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.128885 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-xc587" event={"ID":"a71a3f06-3f08-4259-a6e0-5c615e05ee23","Type":"ContainerDied","Data":"e7f43fd09471fb45225554bf67efde6385f26a5c009d7af0b81cd81c9eff47c9"} Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.128913 4953 scope.go:117] "RemoveContainer" containerID="fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13" Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.147568 4953 scope.go:117] "RemoveContainer" containerID="a4094f4c74adbe2649b33224429248127db0a30a08d40f549311ae1ab232d0ea" Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.217137 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-xc587"] Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.219171 4953 scope.go:117] "RemoveContainer" containerID="fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13" Dec 11 10:32:51 crc kubenswrapper[4953]: E1211 10:32:51.219717 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13\": container with ID starting with fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13 not found: ID does not exist" containerID="fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13" Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.219756 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13"} err="failed to get container status \"fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13\": rpc error: code = NotFound desc = could not find container \"fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13\": container with ID starting with fbc78c2c75f4719145b09e52383e74af887ad9b361f7d4af04ab4e23d0df9c13 not found: ID does not exist" Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.219783 4953 scope.go:117] "RemoveContainer" containerID="a4094f4c74adbe2649b33224429248127db0a30a08d40f549311ae1ab232d0ea" Dec 11 10:32:51 crc kubenswrapper[4953]: E1211 10:32:51.220151 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4094f4c74adbe2649b33224429248127db0a30a08d40f549311ae1ab232d0ea\": container with ID starting with a4094f4c74adbe2649b33224429248127db0a30a08d40f549311ae1ab232d0ea not found: ID does not exist" containerID="a4094f4c74adbe2649b33224429248127db0a30a08d40f549311ae1ab232d0ea" Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.220173 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4094f4c74adbe2649b33224429248127db0a30a08d40f549311ae1ab232d0ea"} err="failed to get container status \"a4094f4c74adbe2649b33224429248127db0a30a08d40f549311ae1ab232d0ea\": rpc error: code = NotFound desc = could not find container 
\"a4094f4c74adbe2649b33224429248127db0a30a08d40f549311ae1ab232d0ea\": container with ID starting with a4094f4c74adbe2649b33224429248127db0a30a08d40f549311ae1ab232d0ea not found: ID does not exist" Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.226222 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-xc587"] Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.649563 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.649633 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.718529 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 11 10:32:51 crc kubenswrapper[4953]: I1211 10:32:51.835018 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:32:51 crc kubenswrapper[4953]: E1211 10:32:51.835225 4953 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 10:32:51 crc kubenswrapper[4953]: E1211 10:32:51.835293 4953 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 10:32:51 crc kubenswrapper[4953]: E1211 10:32:51.835391 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift podName:7be1c768-78bb-476b-b51d-8e4fe80b8500 nodeName:}" failed. No retries permitted until 2025-12-11 10:33:07.835341251 +0000 UTC m=+1305.859200294 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift") pod "swift-storage-0" (UID: "7be1c768-78bb-476b-b51d-8e4fe80b8500") : configmap "swift-ring-files" not found Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.139302 4953 generic.go:334] "Generic (PLEG): container finished" podID="21b311c3-3edf-4905-9929-79787eb29bb8" containerID="7589af4710335884c74da5d91b7244e992b3ef4b45efdaec7dc45eeb9e4bf09c" exitCode=0 Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.139409 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-t4cc8" event={"ID":"21b311c3-3edf-4905-9929-79787eb29bb8","Type":"ContainerDied","Data":"7589af4710335884c74da5d91b7244e992b3ef4b45efdaec7dc45eeb9e4bf09c"} Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.217924 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.484233 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a71a3f06-3f08-4259-a6e0-5c615e05ee23" path="/var/lib/kubelet/pods/a71a3f06-3f08-4259-a6e0-5c615e05ee23/volumes" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.515113 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7975-account-create-update-8dscm"] Dec 11 10:32:52 crc kubenswrapper[4953]: E1211 10:32:52.516499 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3962e78-992c-4a5f-a874-2d744965e3bb" containerName="init" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.516663 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3962e78-992c-4a5f-a874-2d744965e3bb" containerName="init" Dec 11 10:32:52 crc kubenswrapper[4953]: E1211 10:32:52.516864 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fa5e5b-3be4-4fb2-8a05-e9f500184264" containerName="init" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.516953 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fa5e5b-3be4-4fb2-8a05-e9f500184264" containerName="init" Dec 11 10:32:52 crc kubenswrapper[4953]: E1211 10:32:52.517033 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71a3f06-3f08-4259-a6e0-5c615e05ee23" containerName="dnsmasq-dns" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.517120 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71a3f06-3f08-4259-a6e0-5c615e05ee23" containerName="dnsmasq-dns" Dec 11 10:32:52 crc kubenswrapper[4953]: E1211 10:32:52.517197 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71a3f06-3f08-4259-a6e0-5c615e05ee23" containerName="init" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.517263 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71a3f06-3f08-4259-a6e0-5c615e05ee23" containerName="init" Dec 11 10:32:52 crc kubenswrapper[4953]: E1211 10:32:52.517343 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57347790-7a3f-4af9-acb8-5b6652c4240d" containerName="init" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.517529 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="57347790-7a3f-4af9-acb8-5b6652c4240d" containerName="init" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.518098 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="57347790-7a3f-4af9-acb8-5b6652c4240d" containerName="init" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 
10:32:52.518258 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3962e78-992c-4a5f-a874-2d744965e3bb" containerName="init" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.519809 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="a71a3f06-3f08-4259-a6e0-5c615e05ee23" containerName="dnsmasq-dns" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.519987 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="33fa5e5b-3be4-4fb2-8a05-e9f500184264" containerName="init" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.525634 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7975-account-create-update-8dscm" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.527887 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.533216 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7975-account-create-update-8dscm"] Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.604728 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jndc6"] Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.605984 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jndc6" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.612588 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jndc6"] Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.676060 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfw64\" (UniqueName: \"kubernetes.io/projected/d635145c-c504-4916-910f-6a5c18c25aac-kube-api-access-sfw64\") pod \"keystone-7975-account-create-update-8dscm\" (UID: \"d635145c-c504-4916-910f-6a5c18c25aac\") " pod="openstack/keystone-7975-account-create-update-8dscm" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.676155 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d635145c-c504-4916-910f-6a5c18c25aac-operator-scripts\") pod \"keystone-7975-account-create-update-8dscm\" (UID: \"d635145c-c504-4916-910f-6a5c18c25aac\") " pod="openstack/keystone-7975-account-create-update-8dscm" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.777644 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47953ec-41e5-458b-ad9f-a7e72a002b8a-operator-scripts\") pod \"keystone-db-create-jndc6\" (UID: \"e47953ec-41e5-458b-ad9f-a7e72a002b8a\") " pod="openstack/keystone-db-create-jndc6" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.777928 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfw64\" (UniqueName: \"kubernetes.io/projected/d635145c-c504-4916-910f-6a5c18c25aac-kube-api-access-sfw64\") pod \"keystone-7975-account-create-update-8dscm\" (UID: \"d635145c-c504-4916-910f-6a5c18c25aac\") " pod="openstack/keystone-7975-account-create-update-8dscm" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.778075 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqb4j\" (UniqueName: 
\"kubernetes.io/projected/e47953ec-41e5-458b-ad9f-a7e72a002b8a-kube-api-access-wqb4j\") pod \"keystone-db-create-jndc6\" (UID: \"e47953ec-41e5-458b-ad9f-a7e72a002b8a\") " pod="openstack/keystone-db-create-jndc6" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.778187 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d635145c-c504-4916-910f-6a5c18c25aac-operator-scripts\") pod \"keystone-7975-account-create-update-8dscm\" (UID: \"d635145c-c504-4916-910f-6a5c18c25aac\") " pod="openstack/keystone-7975-account-create-update-8dscm" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.780550 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d635145c-c504-4916-910f-6a5c18c25aac-operator-scripts\") pod \"keystone-7975-account-create-update-8dscm\" (UID: \"d635145c-c504-4916-910f-6a5c18c25aac\") " pod="openstack/keystone-7975-account-create-update-8dscm" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.892753 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47953ec-41e5-458b-ad9f-a7e72a002b8a-operator-scripts\") pod \"keystone-db-create-jndc6\" (UID: \"e47953ec-41e5-458b-ad9f-a7e72a002b8a\") " pod="openstack/keystone-db-create-jndc6" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.892884 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqb4j\" (UniqueName: \"kubernetes.io/projected/e47953ec-41e5-458b-ad9f-a7e72a002b8a-kube-api-access-wqb4j\") pod \"keystone-db-create-jndc6\" (UID: \"e47953ec-41e5-458b-ad9f-a7e72a002b8a\") " pod="openstack/keystone-db-create-jndc6" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.892757 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47953ec-41e5-458b-ad9f-a7e72a002b8a-operator-scripts\") pod \"keystone-db-create-jndc6\" (UID: \"e47953ec-41e5-458b-ad9f-a7e72a002b8a\") " pod="openstack/keystone-db-create-jndc6" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.905102 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfw64\" (UniqueName: \"kubernetes.io/projected/d635145c-c504-4916-910f-6a5c18c25aac-kube-api-access-sfw64\") pod \"keystone-7975-account-create-update-8dscm\" (UID: \"d635145c-c504-4916-910f-6a5c18c25aac\") " pod="openstack/keystone-7975-account-create-update-8dscm" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.908901 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-ktvrp"] Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.910304 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-ktvrp" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.922724 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqb4j\" (UniqueName: \"kubernetes.io/projected/e47953ec-41e5-458b-ad9f-a7e72a002b8a-kube-api-access-wqb4j\") pod \"keystone-db-create-jndc6\" (UID: \"e47953ec-41e5-458b-ad9f-a7e72a002b8a\") " pod="openstack/keystone-db-create-jndc6" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.942714 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ktvrp"] Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.994728 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2nww\" (UniqueName: \"kubernetes.io/projected/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d-kube-api-access-p2nww\") pod \"placement-db-create-ktvrp\" (UID: \"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d\") " pod="openstack/placement-db-create-ktvrp" Dec 11 10:32:52 crc kubenswrapper[4953]: I1211 10:32:52.994847 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d-operator-scripts\") pod \"placement-db-create-ktvrp\" (UID: \"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d\") " pod="openstack/placement-db-create-ktvrp" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.065102 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a6a0-account-create-update-6rf9r"] Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.066741 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a6a0-account-create-update-6rf9r" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.069298 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.081114 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a6a0-account-create-update-6rf9r"] Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.082192 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n6pxp" podUID="498f7a43-7db9-42e8-b722-a5fb6ae4749f" containerName="ovn-controller" probeResult="failure" output=< Dec 11 10:32:53 crc kubenswrapper[4953]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 11 10:32:53 crc kubenswrapper[4953]: > Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.099414 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mbtwm" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.101200 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2nww\" (UniqueName: \"kubernetes.io/projected/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d-kube-api-access-p2nww\") pod \"placement-db-create-ktvrp\" (UID: \"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d\") " pod="openstack/placement-db-create-ktvrp" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.102248 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d-operator-scripts\") pod \"placement-db-create-ktvrp\" (UID: \"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d\") " pod="openstack/placement-db-create-ktvrp" 
Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.103613 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d-operator-scripts\") pod \"placement-db-create-ktvrp\" (UID: \"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d\") " pod="openstack/placement-db-create-ktvrp" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.122044 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-lpbjw"] Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.123416 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lpbjw" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.136809 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lpbjw"] Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.151232 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2nww\" (UniqueName: \"kubernetes.io/projected/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d-kube-api-access-p2nww\") pod \"placement-db-create-ktvrp\" (UID: \"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d\") " pod="openstack/placement-db-create-ktvrp" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.180150 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7975-account-create-update-8dscm" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.203701 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5531d3f7-dc86-4e44-8044-3fd0a6f05afc-operator-scripts\") pod \"glance-db-create-lpbjw\" (UID: \"5531d3f7-dc86-4e44-8044-3fd0a6f05afc\") " pod="openstack/glance-db-create-lpbjw" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.203787 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43e322f-fe22-4942-8536-2e29d5bb0639-operator-scripts\") pod \"placement-a6a0-account-create-update-6rf9r\" (UID: \"f43e322f-fe22-4942-8536-2e29d5bb0639\") " pod="openstack/placement-a6a0-account-create-update-6rf9r" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.203968 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf8ss\" (UniqueName: \"kubernetes.io/projected/f43e322f-fe22-4942-8536-2e29d5bb0639-kube-api-access-bf8ss\") pod \"placement-a6a0-account-create-update-6rf9r\" (UID: \"f43e322f-fe22-4942-8536-2e29d5bb0639\") " pod="openstack/placement-a6a0-account-create-update-6rf9r" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.204008 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx88j\" (UniqueName: \"kubernetes.io/projected/5531d3f7-dc86-4e44-8044-3fd0a6f05afc-kube-api-access-kx88j\") pod \"glance-db-create-lpbjw\" (UID: \"5531d3f7-dc86-4e44-8044-3fd0a6f05afc\") " pod="openstack/glance-db-create-lpbjw" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.225200 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jndc6" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.241617 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5aff-account-create-update-8rn6d"] Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.243231 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5aff-account-create-update-8rn6d" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.245984 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.270270 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5aff-account-create-update-8rn6d"] Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.288659 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ktvrp" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.305524 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf8ss\" (UniqueName: \"kubernetes.io/projected/f43e322f-fe22-4942-8536-2e29d5bb0639-kube-api-access-bf8ss\") pod \"placement-a6a0-account-create-update-6rf9r\" (UID: \"f43e322f-fe22-4942-8536-2e29d5bb0639\") " pod="openstack/placement-a6a0-account-create-update-6rf9r" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.305614 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx88j\" (UniqueName: \"kubernetes.io/projected/5531d3f7-dc86-4e44-8044-3fd0a6f05afc-kube-api-access-kx88j\") pod \"glance-db-create-lpbjw\" (UID: \"5531d3f7-dc86-4e44-8044-3fd0a6f05afc\") " pod="openstack/glance-db-create-lpbjw" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.305661 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5531d3f7-dc86-4e44-8044-3fd0a6f05afc-operator-scripts\") pod \"glance-db-create-lpbjw\" (UID: \"5531d3f7-dc86-4e44-8044-3fd0a6f05afc\") " pod="openstack/glance-db-create-lpbjw" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.305703 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43e322f-fe22-4942-8536-2e29d5bb0639-operator-scripts\") pod \"placement-a6a0-account-create-update-6rf9r\" (UID: \"f43e322f-fe22-4942-8536-2e29d5bb0639\") " pod="openstack/placement-a6a0-account-create-update-6rf9r" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.306410 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43e322f-fe22-4942-8536-2e29d5bb0639-operator-scripts\") pod \"placement-a6a0-account-create-update-6rf9r\" (UID: \"f43e322f-fe22-4942-8536-2e29d5bb0639\") " pod="openstack/placement-a6a0-account-create-update-6rf9r" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.308486 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5531d3f7-dc86-4e44-8044-3fd0a6f05afc-operator-scripts\") pod \"glance-db-create-lpbjw\" (UID: \"5531d3f7-dc86-4e44-8044-3fd0a6f05afc\") " pod="openstack/glance-db-create-lpbjw" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.359530 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf8ss\" (UniqueName: 
\"kubernetes.io/projected/f43e322f-fe22-4942-8536-2e29d5bb0639-kube-api-access-bf8ss\") pod \"placement-a6a0-account-create-update-6rf9r\" (UID: \"f43e322f-fe22-4942-8536-2e29d5bb0639\") " pod="openstack/placement-a6a0-account-create-update-6rf9r" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.365216 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx88j\" (UniqueName: \"kubernetes.io/projected/5531d3f7-dc86-4e44-8044-3fd0a6f05afc-kube-api-access-kx88j\") pod \"glance-db-create-lpbjw\" (UID: \"5531d3f7-dc86-4e44-8044-3fd0a6f05afc\") " pod="openstack/glance-db-create-lpbjw" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.385168 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a6a0-account-create-update-6rf9r" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.408305 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26dz4\" (UniqueName: \"kubernetes.io/projected/97da65db-6787-4eee-b1de-cd7da56f16e3-kube-api-access-26dz4\") pod \"glance-5aff-account-create-update-8rn6d\" (UID: \"97da65db-6787-4eee-b1de-cd7da56f16e3\") " pod="openstack/glance-5aff-account-create-update-8rn6d" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.408677 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97da65db-6787-4eee-b1de-cd7da56f16e3-operator-scripts\") pod \"glance-5aff-account-create-update-8rn6d\" (UID: \"97da65db-6787-4eee-b1de-cd7da56f16e3\") " pod="openstack/glance-5aff-account-create-update-8rn6d" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.439623 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lpbjw" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.511131 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26dz4\" (UniqueName: \"kubernetes.io/projected/97da65db-6787-4eee-b1de-cd7da56f16e3-kube-api-access-26dz4\") pod \"glance-5aff-account-create-update-8rn6d\" (UID: \"97da65db-6787-4eee-b1de-cd7da56f16e3\") " pod="openstack/glance-5aff-account-create-update-8rn6d" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.511840 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97da65db-6787-4eee-b1de-cd7da56f16e3-operator-scripts\") pod \"glance-5aff-account-create-update-8rn6d\" (UID: \"97da65db-6787-4eee-b1de-cd7da56f16e3\") " pod="openstack/glance-5aff-account-create-update-8rn6d" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.513491 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97da65db-6787-4eee-b1de-cd7da56f16e3-operator-scripts\") pod \"glance-5aff-account-create-update-8rn6d\" (UID: \"97da65db-6787-4eee-b1de-cd7da56f16e3\") " pod="openstack/glance-5aff-account-create-update-8rn6d" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.523199 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.529685 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26dz4\" (UniqueName: \"kubernetes.io/projected/97da65db-6787-4eee-b1de-cd7da56f16e3-kube-api-access-26dz4\") pod \"glance-5aff-account-create-update-8rn6d\" (UID: \"97da65db-6787-4eee-b1de-cd7da56f16e3\") " pod="openstack/glance-5aff-account-create-update-8rn6d" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.580086 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5aff-account-create-update-8rn6d" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.707948 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7975-account-create-update-8dscm"] Dec 11 10:32:53 crc kubenswrapper[4953]: W1211 10:32:53.709559 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd635145c_c504_4916_910f_6a5c18c25aac.slice/crio-03c9b43016d6a32a16aa71a57177cec318c01629f66ed713d85b7ee7b46d9bec WatchSource:0}: Error finding container 03c9b43016d6a32a16aa71a57177cec318c01629f66ed713d85b7ee7b46d9bec: Status 404 returned error can't find the container with id 03c9b43016d6a32a16aa71a57177cec318c01629f66ed713d85b7ee7b46d9bec Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.714366 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21b311c3-3edf-4905-9929-79787eb29bb8-scripts\") pod \"21b311c3-3edf-4905-9929-79787eb29bb8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.714492 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/21b311c3-3edf-4905-9929-79787eb29bb8-ring-data-devices\") pod \"21b311c3-3edf-4905-9929-79787eb29bb8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.714595 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-dispersionconf\") pod \"21b311c3-3edf-4905-9929-79787eb29bb8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.714659 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4m8j\" (UniqueName: \"kubernetes.io/projected/21b311c3-3edf-4905-9929-79787eb29bb8-kube-api-access-c4m8j\") pod \"21b311c3-3edf-4905-9929-79787eb29bb8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.714683 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-combined-ca-bundle\") pod \"21b311c3-3edf-4905-9929-79787eb29bb8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.714709 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-swiftconf\") pod \"21b311c3-3edf-4905-9929-79787eb29bb8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " Dec 11 10:32:53 crc 
kubenswrapper[4953]: I1211 10:32:53.714852 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/21b311c3-3edf-4905-9929-79787eb29bb8-etc-swift\") pod \"21b311c3-3edf-4905-9929-79787eb29bb8\" (UID: \"21b311c3-3edf-4905-9929-79787eb29bb8\") " Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.715318 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b311c3-3edf-4905-9929-79787eb29bb8-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "21b311c3-3edf-4905-9929-79787eb29bb8" (UID: "21b311c3-3edf-4905-9929-79787eb29bb8"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.715504 4953 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/21b311c3-3edf-4905-9929-79787eb29bb8-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.716155 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21b311c3-3edf-4905-9929-79787eb29bb8-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "21b311c3-3edf-4905-9929-79787eb29bb8" (UID: "21b311c3-3edf-4905-9929-79787eb29bb8"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.721083 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b311c3-3edf-4905-9929-79787eb29bb8-kube-api-access-c4m8j" (OuterVolumeSpecName: "kube-api-access-c4m8j") pod "21b311c3-3edf-4905-9929-79787eb29bb8" (UID: "21b311c3-3edf-4905-9929-79787eb29bb8"). InnerVolumeSpecName "kube-api-access-c4m8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.727236 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "21b311c3-3edf-4905-9929-79787eb29bb8" (UID: "21b311c3-3edf-4905-9929-79787eb29bb8"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.741044 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21b311c3-3edf-4905-9929-79787eb29bb8" (UID: "21b311c3-3edf-4905-9929-79787eb29bb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.742728 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "21b311c3-3edf-4905-9929-79787eb29bb8" (UID: "21b311c3-3edf-4905-9929-79787eb29bb8"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.746342 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b311c3-3edf-4905-9929-79787eb29bb8-scripts" (OuterVolumeSpecName: "scripts") pod "21b311c3-3edf-4905-9929-79787eb29bb8" (UID: "21b311c3-3edf-4905-9929-79787eb29bb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.806342 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.818015 4953 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/21b311c3-3edf-4905-9929-79787eb29bb8-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.818047 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21b311c3-3edf-4905-9929-79787eb29bb8-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.818056 4953 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.818064 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4m8j\" (UniqueName: \"kubernetes.io/projected/21b311c3-3edf-4905-9929-79787eb29bb8-kube-api-access-c4m8j\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.818074 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.818083 4953 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/21b311c3-3edf-4905-9929-79787eb29bb8-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:53 crc kubenswrapper[4953]: I1211 10:32:53.875237 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jndc6"] Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.017193 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a6a0-account-create-update-6rf9r"] Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.027747 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ktvrp"] Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.116738 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lpbjw"] Dec 11 10:32:54 crc kubenswrapper[4953]: W1211 10:32:54.127866 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5531d3f7_dc86_4e44_8044_3fd0a6f05afc.slice/crio-3d30942e41b8a3f15438c578f53bf8c974cd91b4fb162053acc3e66611cdf830 WatchSource:0}: Error finding container 3d30942e41b8a3f15438c578f53bf8c974cd91b4fb162053acc3e66611cdf830: Status 404 returned error can't find the container with id 3d30942e41b8a3f15438c578f53bf8c974cd91b4fb162053acc3e66611cdf830 Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.146141 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-5aff-account-create-update-8rn6d"] Dec 11 10:32:54 crc kubenswrapper[4953]: W1211 10:32:54.159390 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97da65db_6787_4eee_b1de_cd7da56f16e3.slice/crio-ca1745b983e2ee195586650bfc790443d6a7dfaddafac385e782a42c6d56102a WatchSource:0}: Error finding container ca1745b983e2ee195586650bfc790443d6a7dfaddafac385e782a42c6d56102a: Status 404 returned error can't find the container with id ca1745b983e2ee195586650bfc790443d6a7dfaddafac385e782a42c6d56102a Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.162093 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-t4cc8" event={"ID":"21b311c3-3edf-4905-9929-79787eb29bb8","Type":"ContainerDied","Data":"c7e5e46fcb429ae3d03bcd8223a6d734009df75bbcaea7954dd70dac27f966ed"} Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.162168 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7e5e46fcb429ae3d03bcd8223a6d734009df75bbcaea7954dd70dac27f966ed" Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.162129 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t4cc8" Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.164101 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jndc6" event={"ID":"e47953ec-41e5-458b-ad9f-a7e72a002b8a","Type":"ContainerStarted","Data":"777ea53173ee3076bb00d3e01309f872be6e7d24725a16b5dfcbbcedc728efa6"} Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.168772 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ktvrp" event={"ID":"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d","Type":"ContainerStarted","Data":"b0ee509a1d85b4e220466070b575853a1d69a7991517af904126ddfe69aeccda"} Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.169869 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lpbjw" event={"ID":"5531d3f7-dc86-4e44-8044-3fd0a6f05afc","Type":"ContainerStarted","Data":"3d30942e41b8a3f15438c578f53bf8c974cd91b4fb162053acc3e66611cdf830"} Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.171205 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a6a0-account-create-update-6rf9r" event={"ID":"f43e322f-fe22-4942-8536-2e29d5bb0639","Type":"ContainerStarted","Data":"f011cf1a494b6ada3c993bcddfcb24236a680a9c3135924c499e23bbe10a32ba"} Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.228922 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7975-account-create-update-8dscm" event={"ID":"d635145c-c504-4916-910f-6a5c18c25aac","Type":"ContainerStarted","Data":"ed0ad3252bc35dde5add6a8556d73829cf5ce860401ea68746bcbdd08f771641"} Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.228974 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7975-account-create-update-8dscm" event={"ID":"d635145c-c504-4916-910f-6a5c18c25aac","Type":"ContainerStarted","Data":"03c9b43016d6a32a16aa71a57177cec318c01629f66ed713d85b7ee7b46d9bec"} Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.370743 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7975-account-create-update-8dscm" podStartSLOduration=2.370725507 podStartE2EDuration="2.370725507s" podCreationTimestamp="2025-12-11 10:32:52 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:32:54.367145853 +0000 UTC m=+1292.391004876" watchObservedRunningTime="2025-12-11 10:32:54.370725507 +0000 UTC m=+1292.394584540" Dec 11 10:32:54 crc kubenswrapper[4953]: I1211 10:32:54.818845 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.236613 4953 generic.go:334] "Generic (PLEG): container finished" podID="97da65db-6787-4eee-b1de-cd7da56f16e3" containerID="72ef224d6c1c02029434c22876ede6fd4c8207724a4168eb2cdb7e194df7370b" exitCode=0 Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.236690 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5aff-account-create-update-8rn6d" event={"ID":"97da65db-6787-4eee-b1de-cd7da56f16e3","Type":"ContainerDied","Data":"72ef224d6c1c02029434c22876ede6fd4c8207724a4168eb2cdb7e194df7370b"} Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.236714 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5aff-account-create-update-8rn6d" event={"ID":"97da65db-6787-4eee-b1de-cd7da56f16e3","Type":"ContainerStarted","Data":"ca1745b983e2ee195586650bfc790443d6a7dfaddafac385e782a42c6d56102a"} Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.239051 4953 generic.go:334] "Generic (PLEG): container finished" podID="d635145c-c504-4916-910f-6a5c18c25aac" containerID="ed0ad3252bc35dde5add6a8556d73829cf5ce860401ea68746bcbdd08f771641" exitCode=0 Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.239204 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7975-account-create-update-8dscm" event={"ID":"d635145c-c504-4916-910f-6a5c18c25aac","Type":"ContainerDied","Data":"ed0ad3252bc35dde5add6a8556d73829cf5ce860401ea68746bcbdd08f771641"} Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.241747 4953 generic.go:334] "Generic (PLEG): container finished" podID="4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d" containerID="a54a95763bb0d56b10459a347095618414ea8f3db4ecc2dcde6e2baafdcdf4c0" exitCode=0 Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.241793 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ktvrp" event={"ID":"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d","Type":"ContainerDied","Data":"a54a95763bb0d56b10459a347095618414ea8f3db4ecc2dcde6e2baafdcdf4c0"} Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.245045 4953 generic.go:334] "Generic (PLEG): container finished" podID="e47953ec-41e5-458b-ad9f-a7e72a002b8a" containerID="eea627d5e97aa1a1057dbb22ff6c90022c45a3a65b7f437101642242dc20cf70" exitCode=0 Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.245172 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jndc6" event={"ID":"e47953ec-41e5-458b-ad9f-a7e72a002b8a","Type":"ContainerDied","Data":"eea627d5e97aa1a1057dbb22ff6c90022c45a3a65b7f437101642242dc20cf70"} Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.246970 4953 generic.go:334] "Generic (PLEG): container finished" podID="5531d3f7-dc86-4e44-8044-3fd0a6f05afc" containerID="55b113bd75a4484bbac3106dd545ff87217763b1e7a5646cc0bcaf98c719fdee" exitCode=0 Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.247070 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lpbjw" 
event={"ID":"5531d3f7-dc86-4e44-8044-3fd0a6f05afc","Type":"ContainerDied","Data":"55b113bd75a4484bbac3106dd545ff87217763b1e7a5646cc0bcaf98c719fdee"} Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.248549 4953 generic.go:334] "Generic (PLEG): container finished" podID="f43e322f-fe22-4942-8536-2e29d5bb0639" containerID="d94ea683beee7d4145a352f9da743957bfb974fd3dcdfc07f2199ee8dca67ba5" exitCode=0 Dec 11 10:32:55 crc kubenswrapper[4953]: I1211 10:32:55.248602 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a6a0-account-create-update-6rf9r" event={"ID":"f43e322f-fe22-4942-8536-2e29d5bb0639","Type":"ContainerDied","Data":"d94ea683beee7d4145a352f9da743957bfb974fd3dcdfc07f2199ee8dca67ba5"} Dec 11 10:32:56 crc kubenswrapper[4953]: I1211 10:32:56.262470 4953 generic.go:334] "Generic (PLEG): container finished" podID="01196778-96de-4f79-b9ac-e01243f86ebb" containerID="94ecea46a02f645c72f741be8c0e8c18496d154632db9f0e42995f5ff8e48207" exitCode=0 Dec 11 10:32:56 crc kubenswrapper[4953]: I1211 10:32:56.262633 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"01196778-96de-4f79-b9ac-e01243f86ebb","Type":"ContainerDied","Data":"94ecea46a02f645c72f741be8c0e8c18496d154632db9f0e42995f5ff8e48207"} Dec 11 10:32:56 crc kubenswrapper[4953]: I1211 10:32:56.805035 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a6a0-account-create-update-6rf9r" Dec 11 10:32:56 crc kubenswrapper[4953]: I1211 10:32:56.947009 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43e322f-fe22-4942-8536-2e29d5bb0639-operator-scripts\") pod \"f43e322f-fe22-4942-8536-2e29d5bb0639\" (UID: \"f43e322f-fe22-4942-8536-2e29d5bb0639\") " Dec 11 10:32:56 crc kubenswrapper[4953]: I1211 10:32:56.947982 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f43e322f-fe22-4942-8536-2e29d5bb0639-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f43e322f-fe22-4942-8536-2e29d5bb0639" (UID: "f43e322f-fe22-4942-8536-2e29d5bb0639"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:56 crc kubenswrapper[4953]: I1211 10:32:56.949674 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf8ss\" (UniqueName: \"kubernetes.io/projected/f43e322f-fe22-4942-8536-2e29d5bb0639-kube-api-access-bf8ss\") pod \"f43e322f-fe22-4942-8536-2e29d5bb0639\" (UID: \"f43e322f-fe22-4942-8536-2e29d5bb0639\") " Dec 11 10:32:56 crc kubenswrapper[4953]: I1211 10:32:56.950271 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43e322f-fe22-4942-8536-2e29d5bb0639-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:56 crc kubenswrapper[4953]: I1211 10:32:56.951282 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-5aff-account-create-update-8rn6d" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.089299 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97da65db-6787-4eee-b1de-cd7da56f16e3-operator-scripts\") pod \"97da65db-6787-4eee-b1de-cd7da56f16e3\" (UID: \"97da65db-6787-4eee-b1de-cd7da56f16e3\") " Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.089382 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26dz4\" (UniqueName: \"kubernetes.io/projected/97da65db-6787-4eee-b1de-cd7da56f16e3-kube-api-access-26dz4\") pod \"97da65db-6787-4eee-b1de-cd7da56f16e3\" (UID: \"97da65db-6787-4eee-b1de-cd7da56f16e3\") " Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.092977 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97da65db-6787-4eee-b1de-cd7da56f16e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97da65db-6787-4eee-b1de-cd7da56f16e3" (UID: "97da65db-6787-4eee-b1de-cd7da56f16e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.098886 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97da65db-6787-4eee-b1de-cd7da56f16e3-kube-api-access-26dz4" (OuterVolumeSpecName: "kube-api-access-26dz4") pod "97da65db-6787-4eee-b1de-cd7da56f16e3" (UID: "97da65db-6787-4eee-b1de-cd7da56f16e3"). InnerVolumeSpecName "kube-api-access-26dz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.106461 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f43e322f-fe22-4942-8536-2e29d5bb0639-kube-api-access-bf8ss" (OuterVolumeSpecName: "kube-api-access-bf8ss") pod "f43e322f-fe22-4942-8536-2e29d5bb0639" (UID: "f43e322f-fe22-4942-8536-2e29d5bb0639"). InnerVolumeSpecName "kube-api-access-bf8ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.112916 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ktvrp" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.121239 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jndc6" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.135211 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7975-account-create-update-8dscm" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.138639 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-lpbjw" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.190908 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47953ec-41e5-458b-ad9f-a7e72a002b8a-operator-scripts\") pod \"e47953ec-41e5-458b-ad9f-a7e72a002b8a\" (UID: \"e47953ec-41e5-458b-ad9f-a7e72a002b8a\") " Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.190987 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d635145c-c504-4916-910f-6a5c18c25aac-operator-scripts\") pod \"d635145c-c504-4916-910f-6a5c18c25aac\" (UID: \"d635145c-c504-4916-910f-6a5c18c25aac\") " Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.191025 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfw64\" (UniqueName: \"kubernetes.io/projected/d635145c-c504-4916-910f-6a5c18c25aac-kube-api-access-sfw64\") pod \"d635145c-c504-4916-910f-6a5c18c25aac\" (UID: \"d635145c-c504-4916-910f-6a5c18c25aac\") " Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.191047 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d-operator-scripts\") pod \"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d\" (UID: \"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d\") " Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.191096 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqb4j\" (UniqueName: \"kubernetes.io/projected/e47953ec-41e5-458b-ad9f-a7e72a002b8a-kube-api-access-wqb4j\") pod \"e47953ec-41e5-458b-ad9f-a7e72a002b8a\" (UID: \"e47953ec-41e5-458b-ad9f-a7e72a002b8a\") " Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.191149 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5531d3f7-dc86-4e44-8044-3fd0a6f05afc-operator-scripts\") pod \"5531d3f7-dc86-4e44-8044-3fd0a6f05afc\" (UID: \"5531d3f7-dc86-4e44-8044-3fd0a6f05afc\") " Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.191177 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx88j\" (UniqueName: \"kubernetes.io/projected/5531d3f7-dc86-4e44-8044-3fd0a6f05afc-kube-api-access-kx88j\") pod \"5531d3f7-dc86-4e44-8044-3fd0a6f05afc\" (UID: \"5531d3f7-dc86-4e44-8044-3fd0a6f05afc\") " Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.191288 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2nww\" (UniqueName: \"kubernetes.io/projected/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d-kube-api-access-p2nww\") pod \"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d\" (UID: \"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d\") " Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.191382 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e47953ec-41e5-458b-ad9f-a7e72a002b8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e47953ec-41e5-458b-ad9f-a7e72a002b8a" (UID: "e47953ec-41e5-458b-ad9f-a7e72a002b8a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.191657 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d" (UID: "4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.191673 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97da65db-6787-4eee-b1de-cd7da56f16e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.191695 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26dz4\" (UniqueName: \"kubernetes.io/projected/97da65db-6787-4eee-b1de-cd7da56f16e3-kube-api-access-26dz4\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.191709 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf8ss\" (UniqueName: \"kubernetes.io/projected/f43e322f-fe22-4942-8536-2e29d5bb0639-kube-api-access-bf8ss\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.191725 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47953ec-41e5-458b-ad9f-a7e72a002b8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.192240 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d635145c-c504-4916-910f-6a5c18c25aac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d635145c-c504-4916-910f-6a5c18c25aac" (UID: "d635145c-c504-4916-910f-6a5c18c25aac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.192601 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5531d3f7-dc86-4e44-8044-3fd0a6f05afc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5531d3f7-dc86-4e44-8044-3fd0a6f05afc" (UID: "5531d3f7-dc86-4e44-8044-3fd0a6f05afc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.196003 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47953ec-41e5-458b-ad9f-a7e72a002b8a-kube-api-access-wqb4j" (OuterVolumeSpecName: "kube-api-access-wqb4j") pod "e47953ec-41e5-458b-ad9f-a7e72a002b8a" (UID: "e47953ec-41e5-458b-ad9f-a7e72a002b8a"). InnerVolumeSpecName "kube-api-access-wqb4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.196391 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5531d3f7-dc86-4e44-8044-3fd0a6f05afc-kube-api-access-kx88j" (OuterVolumeSpecName: "kube-api-access-kx88j") pod "5531d3f7-dc86-4e44-8044-3fd0a6f05afc" (UID: "5531d3f7-dc86-4e44-8044-3fd0a6f05afc"). InnerVolumeSpecName "kube-api-access-kx88j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.196433 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d635145c-c504-4916-910f-6a5c18c25aac-kube-api-access-sfw64" (OuterVolumeSpecName: "kube-api-access-sfw64") pod "d635145c-c504-4916-910f-6a5c18c25aac" (UID: "d635145c-c504-4916-910f-6a5c18c25aac"). InnerVolumeSpecName "kube-api-access-sfw64". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.198328 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d-kube-api-access-p2nww" (OuterVolumeSpecName: "kube-api-access-p2nww") pod "4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d" (UID: "4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d"). InnerVolumeSpecName "kube-api-access-p2nww". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.272265 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5aff-account-create-update-8rn6d" event={"ID":"97da65db-6787-4eee-b1de-cd7da56f16e3","Type":"ContainerDied","Data":"ca1745b983e2ee195586650bfc790443d6a7dfaddafac385e782a42c6d56102a"} Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.273377 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca1745b983e2ee195586650bfc790443d6a7dfaddafac385e782a42c6d56102a" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.272283 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5aff-account-create-update-8rn6d" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.274674 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7975-account-create-update-8dscm" event={"ID":"d635145c-c504-4916-910f-6a5c18c25aac","Type":"ContainerDied","Data":"03c9b43016d6a32a16aa71a57177cec318c01629f66ed713d85b7ee7b46d9bec"} Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.274709 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7975-account-create-update-8dscm" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.274732 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03c9b43016d6a32a16aa71a57177cec318c01629f66ed713d85b7ee7b46d9bec" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.276658 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"01196778-96de-4f79-b9ac-e01243f86ebb","Type":"ContainerStarted","Data":"a1bc8164296634778d4abaa0460ca228c5ac0bad626c3a54c3a93f97fe857237"} Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.277032 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.278107 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ktvrp" event={"ID":"4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d","Type":"ContainerDied","Data":"b0ee509a1d85b4e220466070b575853a1d69a7991517af904126ddfe69aeccda"} Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.278133 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0ee509a1d85b4e220466070b575853a1d69a7991517af904126ddfe69aeccda" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.278150 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ktvrp" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.279342 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jndc6" event={"ID":"e47953ec-41e5-458b-ad9f-a7e72a002b8a","Type":"ContainerDied","Data":"777ea53173ee3076bb00d3e01309f872be6e7d24725a16b5dfcbbcedc728efa6"} Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.279391 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="777ea53173ee3076bb00d3e01309f872be6e7d24725a16b5dfcbbcedc728efa6" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.279444 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jndc6" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.282417 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lpbjw" event={"ID":"5531d3f7-dc86-4e44-8044-3fd0a6f05afc","Type":"ContainerDied","Data":"3d30942e41b8a3f15438c578f53bf8c974cd91b4fb162053acc3e66611cdf830"} Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.282461 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d30942e41b8a3f15438c578f53bf8c974cd91b4fb162053acc3e66611cdf830" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.282531 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-lpbjw" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.285754 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a6a0-account-create-update-6rf9r" event={"ID":"f43e322f-fe22-4942-8536-2e29d5bb0639","Type":"ContainerDied","Data":"f011cf1a494b6ada3c993bcddfcb24236a680a9c3135924c499e23bbe10a32ba"} Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.285854 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f011cf1a494b6ada3c993bcddfcb24236a680a9c3135924c499e23bbe10a32ba" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.285955 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a6a0-account-create-update-6rf9r" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.292991 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d635145c-c504-4916-910f-6a5c18c25aac-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.293029 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfw64\" (UniqueName: \"kubernetes.io/projected/d635145c-c504-4916-910f-6a5c18c25aac-kube-api-access-sfw64\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.293071 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.293088 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqb4j\" (UniqueName: \"kubernetes.io/projected/e47953ec-41e5-458b-ad9f-a7e72a002b8a-kube-api-access-wqb4j\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.293100 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5531d3f7-dc86-4e44-8044-3fd0a6f05afc-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.293112 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx88j\" (UniqueName: \"kubernetes.io/projected/5531d3f7-dc86-4e44-8044-3fd0a6f05afc-kube-api-access-kx88j\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.293147 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2nww\" (UniqueName: \"kubernetes.io/projected/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d-kube-api-access-p2nww\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:57 crc kubenswrapper[4953]: I1211 10:32:57.302458 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.908283332 podStartE2EDuration="1m29.302443368s" podCreationTimestamp="2025-12-11 10:31:28 +0000 UTC" firstStartedPulling="2025-12-11 10:31:30.835844029 +0000 UTC m=+1208.859703062" lastFinishedPulling="2025-12-11 10:32:22.230004065 +0000 UTC m=+1260.253863098" observedRunningTime="2025-12-11 10:32:57.298050988 +0000 UTC m=+1295.321910021" watchObservedRunningTime="2025-12-11 10:32:57.302443368 +0000 UTC m=+1295.326302401" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.038968 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n6pxp" 
podUID="498f7a43-7db9-42e8-b722-a5fb6ae4749f" containerName="ovn-controller" probeResult="failure" output=< Dec 11 10:32:58 crc kubenswrapper[4953]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 11 10:32:58 crc kubenswrapper[4953]: > Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.060376 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mbtwm" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.406752 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n6pxp-config-8ncjt"] Dec 11 10:32:58 crc kubenswrapper[4953]: E1211 10:32:58.407182 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d" containerName="mariadb-database-create" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407199 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d" containerName="mariadb-database-create" Dec 11 10:32:58 crc kubenswrapper[4953]: E1211 10:32:58.407216 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47953ec-41e5-458b-ad9f-a7e72a002b8a" containerName="mariadb-database-create" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407224 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47953ec-41e5-458b-ad9f-a7e72a002b8a" containerName="mariadb-database-create" Dec 11 10:32:58 crc kubenswrapper[4953]: E1211 10:32:58.407248 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5531d3f7-dc86-4e44-8044-3fd0a6f05afc" containerName="mariadb-database-create" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407256 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5531d3f7-dc86-4e44-8044-3fd0a6f05afc" containerName="mariadb-database-create" Dec 11 10:32:58 crc kubenswrapper[4953]: E1211 10:32:58.407270 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b311c3-3edf-4905-9929-79787eb29bb8" containerName="swift-ring-rebalance" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407278 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b311c3-3edf-4905-9929-79787eb29bb8" containerName="swift-ring-rebalance" Dec 11 10:32:58 crc kubenswrapper[4953]: E1211 10:32:58.407295 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97da65db-6787-4eee-b1de-cd7da56f16e3" containerName="mariadb-account-create-update" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407303 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="97da65db-6787-4eee-b1de-cd7da56f16e3" containerName="mariadb-account-create-update" Dec 11 10:32:58 crc kubenswrapper[4953]: E1211 10:32:58.407319 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d635145c-c504-4916-910f-6a5c18c25aac" containerName="mariadb-account-create-update" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407326 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="d635145c-c504-4916-910f-6a5c18c25aac" containerName="mariadb-account-create-update" Dec 11 10:32:58 crc kubenswrapper[4953]: E1211 10:32:58.407340 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43e322f-fe22-4942-8536-2e29d5bb0639" containerName="mariadb-account-create-update" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407348 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43e322f-fe22-4942-8536-2e29d5bb0639" containerName="mariadb-account-create-update" Dec 11 
Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407541 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5531d3f7-dc86-4e44-8044-3fd0a6f05afc" containerName="mariadb-database-create" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407604 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b311c3-3edf-4905-9929-79787eb29bb8" containerName="swift-ring-rebalance" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407623 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="d635145c-c504-4916-910f-6a5c18c25aac" containerName="mariadb-account-create-update" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407643 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d" containerName="mariadb-database-create" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407658 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47953ec-41e5-458b-ad9f-a7e72a002b8a" containerName="mariadb-database-create" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407670 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="97da65db-6787-4eee-b1de-cd7da56f16e3" containerName="mariadb-account-create-update" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.407709 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43e322f-fe22-4942-8536-2e29d5bb0639" containerName="mariadb-account-create-update" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.408469 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.410260 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.421235 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n6pxp-config-8ncjt"] Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.481442 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzz9s\" (UniqueName: \"kubernetes.io/projected/4f5d74d5-93d8-4ed3-9d06-b7311f291838-kube-api-access-fzz9s\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.481492 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-log-ovn\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.481594 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-run-ovn\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.481719 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f5d74d5-93d8-4ed3-9d06-b7311f291838-scripts\") pod 
\"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.481836 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-run\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.482009 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f5d74d5-93d8-4ed3-9d06-b7311f291838-additional-scripts\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.583742 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f5d74d5-93d8-4ed3-9d06-b7311f291838-additional-scripts\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.583858 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzz9s\" (UniqueName: \"kubernetes.io/projected/4f5d74d5-93d8-4ed3-9d06-b7311f291838-kube-api-access-fzz9s\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.583895 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-log-ovn\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.584003 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-run-ovn\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.584052 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f5d74d5-93d8-4ed3-9d06-b7311f291838-scripts\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.584128 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-run\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.584252 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-log-ovn\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.584395 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-run-ovn\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.584674 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f5d74d5-93d8-4ed3-9d06-b7311f291838-additional-scripts\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.584864 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-run\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.586227 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f5d74d5-93d8-4ed3-9d06-b7311f291838-scripts\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.602670 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzz9s\" (UniqueName: \"kubernetes.io/projected/4f5d74d5-93d8-4ed3-9d06-b7311f291838-kube-api-access-fzz9s\") pod \"ovn-controller-n6pxp-config-8ncjt\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:58 crc kubenswrapper[4953]: I1211 10:32:58.728830 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:32:59 crc kubenswrapper[4953]: I1211 10:32:59.311614 4953 generic.go:334] "Generic (PLEG): container finished" podID="b29c8985-0d8c-4382-9969-29422929136f" containerID="8ccd21efbe477435dfe6f0792b8d26e2c55b2f1636676f65356fe0d625e5ad71" exitCode=0 Dec 11 10:32:59 crc kubenswrapper[4953]: I1211 10:32:59.311716 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b29c8985-0d8c-4382-9969-29422929136f","Type":"ContainerDied","Data":"8ccd21efbe477435dfe6f0792b8d26e2c55b2f1636676f65356fe0d625e5ad71"} Dec 11 10:32:59 crc kubenswrapper[4953]: I1211 10:32:59.328334 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n6pxp-config-8ncjt"] Dec 11 10:32:59 crc kubenswrapper[4953]: W1211 10:32:59.347721 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f5d74d5_93d8_4ed3_9d06_b7311f291838.slice/crio-1352bb14e744363a850031e23cc501a27373cf5edfdda76fe7951715e4d57010 WatchSource:0}: Error finding container 1352bb14e744363a850031e23cc501a27373cf5edfdda76fe7951715e4d57010: Status 404 returned error can't find the container with id 1352bb14e744363a850031e23cc501a27373cf5edfdda76fe7951715e4d57010 Dec 11 10:33:00 crc kubenswrapper[4953]: I1211 10:33:00.323867 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b29c8985-0d8c-4382-9969-29422929136f","Type":"ContainerStarted","Data":"8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480"} Dec 11 10:33:00 crc kubenswrapper[4953]: I1211 10:33:00.324421 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 11 10:33:00 crc kubenswrapper[4953]: I1211 10:33:00.325743 4953 generic.go:334] "Generic (PLEG): container finished" podID="4f5d74d5-93d8-4ed3-9d06-b7311f291838" containerID="6cb63fc021abbf51294f3340998ca79c270fb7c016342e5021294c8868f4f534" exitCode=0 Dec 11 10:33:00 crc kubenswrapper[4953]: I1211 10:33:00.325793 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6pxp-config-8ncjt" event={"ID":"4f5d74d5-93d8-4ed3-9d06-b7311f291838","Type":"ContainerDied","Data":"6cb63fc021abbf51294f3340998ca79c270fb7c016342e5021294c8868f4f534"} Dec 11 10:33:00 crc kubenswrapper[4953]: I1211 10:33:00.325813 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6pxp-config-8ncjt" event={"ID":"4f5d74d5-93d8-4ed3-9d06-b7311f291838","Type":"ContainerStarted","Data":"1352bb14e744363a850031e23cc501a27373cf5edfdda76fe7951715e4d57010"} Dec 11 10:33:00 crc kubenswrapper[4953]: I1211 10:33:00.357475 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371944.497316 podStartE2EDuration="1m32.357459514s" podCreationTimestamp="2025-12-11 10:31:28 +0000 UTC" firstStartedPulling="2025-12-11 10:31:30.717824611 +0000 UTC m=+1208.741683644" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:33:00.353370479 +0000 UTC m=+1298.377229522" watchObservedRunningTime="2025-12-11 10:33:00.357459514 +0000 UTC m=+1298.381318547" Dec 11 10:33:01 crc kubenswrapper[4953]: I1211 10:33:01.888628 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.079708 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-run-ovn\") pod \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.079877 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f5d74d5-93d8-4ed3-9d06-b7311f291838-additional-scripts\") pod \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.079878 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4f5d74d5-93d8-4ed3-9d06-b7311f291838" (UID: "4f5d74d5-93d8-4ed3-9d06-b7311f291838"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.079956 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f5d74d5-93d8-4ed3-9d06-b7311f291838-scripts\") pod \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.080033 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzz9s\" (UniqueName: \"kubernetes.io/projected/4f5d74d5-93d8-4ed3-9d06-b7311f291838-kube-api-access-fzz9s\") pod \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.080062 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-log-ovn\") pod \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.080118 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-run\") pod \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\" (UID: \"4f5d74d5-93d8-4ed3-9d06-b7311f291838\") " Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.080273 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4f5d74d5-93d8-4ed3-9d06-b7311f291838" (UID: "4f5d74d5-93d8-4ed3-9d06-b7311f291838"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.080355 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-run" (OuterVolumeSpecName: "var-run") pod "4f5d74d5-93d8-4ed3-9d06-b7311f291838" (UID: "4f5d74d5-93d8-4ed3-9d06-b7311f291838"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.080827 4953 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.080844 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f5d74d5-93d8-4ed3-9d06-b7311f291838-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4f5d74d5-93d8-4ed3-9d06-b7311f291838" (UID: "4f5d74d5-93d8-4ed3-9d06-b7311f291838"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.080858 4953 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.080886 4953 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f5d74d5-93d8-4ed3-9d06-b7311f291838-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.080970 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f5d74d5-93d8-4ed3-9d06-b7311f291838-scripts" (OuterVolumeSpecName: "scripts") pod "4f5d74d5-93d8-4ed3-9d06-b7311f291838" (UID: "4f5d74d5-93d8-4ed3-9d06-b7311f291838"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.091811 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5d74d5-93d8-4ed3-9d06-b7311f291838-kube-api-access-fzz9s" (OuterVolumeSpecName: "kube-api-access-fzz9s") pod "4f5d74d5-93d8-4ed3-9d06-b7311f291838" (UID: "4f5d74d5-93d8-4ed3-9d06-b7311f291838"). InnerVolumeSpecName "kube-api-access-fzz9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.278969 4953 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f5d74d5-93d8-4ed3-9d06-b7311f291838-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.279003 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f5d74d5-93d8-4ed3-9d06-b7311f291838-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.279012 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzz9s\" (UniqueName: \"kubernetes.io/projected/4f5d74d5-93d8-4ed3-9d06-b7311f291838-kube-api-access-fzz9s\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.346093 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6pxp-config-8ncjt" event={"ID":"4f5d74d5-93d8-4ed3-9d06-b7311f291838","Type":"ContainerDied","Data":"1352bb14e744363a850031e23cc501a27373cf5edfdda76fe7951715e4d57010"} Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.346141 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1352bb14e744363a850031e23cc501a27373cf5edfdda76fe7951715e4d57010" Dec 11 10:33:02 crc kubenswrapper[4953]: I1211 10:33:02.346210 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n6pxp-config-8ncjt" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.036964 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n6pxp-config-8ncjt"] Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.045181 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n6pxp-config-8ncjt"] Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.051221 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-n6pxp" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.311250 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-vzw7v"] Dec 11 10:33:03 crc kubenswrapper[4953]: E1211 10:33:03.311791 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5d74d5-93d8-4ed3-9d06-b7311f291838" containerName="ovn-config" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.311825 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5d74d5-93d8-4ed3-9d06-b7311f291838" containerName="ovn-config" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.312042 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5d74d5-93d8-4ed3-9d06-b7311f291838" containerName="ovn-config" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.312900 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.315860 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.316226 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bpvgf" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.325703 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n6pxp-config-vp6f6"] Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.327106 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.329264 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.332421 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-vzw7v"] Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.456213 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a60437b-c6d5-41ba-a4cd-152b5476c385-scripts\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.456267 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-config-data\") pod \"glance-db-sync-vzw7v\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.456297 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a60437b-c6d5-41ba-a4cd-152b5476c385-additional-scripts\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.456316 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g969t\" (UniqueName: \"kubernetes.io/projected/9a60437b-c6d5-41ba-a4cd-152b5476c385-kube-api-access-g969t\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.456405 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-db-sync-config-data\") pod \"glance-db-sync-vzw7v\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.456832 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-run\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc 
kubenswrapper[4953]: I1211 10:33:03.456914 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-run-ovn\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.457032 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-combined-ca-bundle\") pod \"glance-db-sync-vzw7v\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.457213 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-log-ovn\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.457267 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcrts\" (UniqueName: \"kubernetes.io/projected/f099a9d1-d895-4fdc-84cc-28df6fb24db0-kube-api-access-xcrts\") pod \"glance-db-sync-vzw7v\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.458313 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n6pxp-config-vp6f6"] Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.559016 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-combined-ca-bundle\") pod \"glance-db-sync-vzw7v\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.559127 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-log-ovn\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.559156 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcrts\" (UniqueName: \"kubernetes.io/projected/f099a9d1-d895-4fdc-84cc-28df6fb24db0-kube-api-access-xcrts\") pod \"glance-db-sync-vzw7v\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.559191 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a60437b-c6d5-41ba-a4cd-152b5476c385-scripts\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.559229 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-config-data\") pod \"glance-db-sync-vzw7v\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.559291 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a60437b-c6d5-41ba-a4cd-152b5476c385-additional-scripts\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.559313 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g969t\" (UniqueName: \"kubernetes.io/projected/9a60437b-c6d5-41ba-a4cd-152b5476c385-kube-api-access-g969t\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.559425 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-db-sync-config-data\") pod \"glance-db-sync-vzw7v\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.559474 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-run\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.559497 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-log-ovn\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.559514 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-run-ovn\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.559616 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-run-ovn\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.560471 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a60437b-c6d5-41ba-a4cd-152b5476c385-additional-scripts\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.560950 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-run\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.562127 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a60437b-c6d5-41ba-a4cd-152b5476c385-scripts\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.564212 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-combined-ca-bundle\") pod \"glance-db-sync-vzw7v\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.566228 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-db-sync-config-data\") pod \"glance-db-sync-vzw7v\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.572671 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-config-data\") pod \"glance-db-sync-vzw7v\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.587087 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g969t\" (UniqueName: \"kubernetes.io/projected/9a60437b-c6d5-41ba-a4cd-152b5476c385-kube-api-access-g969t\") pod \"ovn-controller-n6pxp-config-vp6f6\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.587220 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcrts\" (UniqueName: \"kubernetes.io/projected/f099a9d1-d895-4fdc-84cc-28df6fb24db0-kube-api-access-xcrts\") pod \"glance-db-sync-vzw7v\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.706443 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:03 crc kubenswrapper[4953]: I1211 10:33:03.707500 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:04 crc kubenswrapper[4953]: I1211 10:33:04.510808 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5d74d5-93d8-4ed3-9d06-b7311f291838" path="/var/lib/kubelet/pods/4f5d74d5-93d8-4ed3-9d06-b7311f291838/volumes" Dec 11 10:33:04 crc kubenswrapper[4953]: I1211 10:33:04.511855 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n6pxp-config-vp6f6"] Dec 11 10:33:04 crc kubenswrapper[4953]: I1211 10:33:04.534486 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-vzw7v"] Dec 11 10:33:05 crc kubenswrapper[4953]: I1211 10:33:05.495435 4953 generic.go:334] "Generic (PLEG): container finished" podID="9a60437b-c6d5-41ba-a4cd-152b5476c385" containerID="e24028122b0541ca1dcdd9f22fbc323c08f54b9267d8e6861c4cf08008ec58e7" exitCode=0 Dec 11 10:33:05 crc kubenswrapper[4953]: I1211 10:33:05.495491 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6pxp-config-vp6f6" event={"ID":"9a60437b-c6d5-41ba-a4cd-152b5476c385","Type":"ContainerDied","Data":"e24028122b0541ca1dcdd9f22fbc323c08f54b9267d8e6861c4cf08008ec58e7"} Dec 11 10:33:05 crc kubenswrapper[4953]: I1211 10:33:05.496020 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6pxp-config-vp6f6" event={"ID":"9a60437b-c6d5-41ba-a4cd-152b5476c385","Type":"ContainerStarted","Data":"ac46b7d14e2cf9b705f90de69fd48f593b47b7b201ff4b7d398331d6f2f48ede"} Dec 11 10:33:05 crc kubenswrapper[4953]: I1211 10:33:05.497455 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vzw7v" event={"ID":"f099a9d1-d895-4fdc-84cc-28df6fb24db0","Type":"ContainerStarted","Data":"d12f8c3eb4f6a86e0bb66dfb59a68462dca1bce85c82ba932d6cca1c44c8a5b7"} Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.839437 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.875480 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a60437b-c6d5-41ba-a4cd-152b5476c385-scripts\") pod \"9a60437b-c6d5-41ba-a4cd-152b5476c385\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.875528 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-log-ovn\") pod \"9a60437b-c6d5-41ba-a4cd-152b5476c385\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.875566 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-run\") pod \"9a60437b-c6d5-41ba-a4cd-152b5476c385\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.875599 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g969t\" (UniqueName: \"kubernetes.io/projected/9a60437b-c6d5-41ba-a4cd-152b5476c385-kube-api-access-g969t\") pod \"9a60437b-c6d5-41ba-a4cd-152b5476c385\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.875675 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a60437b-c6d5-41ba-a4cd-152b5476c385-additional-scripts\") pod \"9a60437b-c6d5-41ba-a4cd-152b5476c385\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.875708 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-run-ovn\") pod \"9a60437b-c6d5-41ba-a4cd-152b5476c385\" (UID: \"9a60437b-c6d5-41ba-a4cd-152b5476c385\") " Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.876004 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9a60437b-c6d5-41ba-a4cd-152b5476c385" (UID: "9a60437b-c6d5-41ba-a4cd-152b5476c385"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.876661 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-run" (OuterVolumeSpecName: "var-run") pod "9a60437b-c6d5-41ba-a4cd-152b5476c385" (UID: "9a60437b-c6d5-41ba-a4cd-152b5476c385"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.876700 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9a60437b-c6d5-41ba-a4cd-152b5476c385" (UID: "9a60437b-c6d5-41ba-a4cd-152b5476c385"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.877063 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a60437b-c6d5-41ba-a4cd-152b5476c385-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9a60437b-c6d5-41ba-a4cd-152b5476c385" (UID: "9a60437b-c6d5-41ba-a4cd-152b5476c385"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.877459 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a60437b-c6d5-41ba-a4cd-152b5476c385-scripts" (OuterVolumeSpecName: "scripts") pod "9a60437b-c6d5-41ba-a4cd-152b5476c385" (UID: "9a60437b-c6d5-41ba-a4cd-152b5476c385"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.884264 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a60437b-c6d5-41ba-a4cd-152b5476c385-kube-api-access-g969t" (OuterVolumeSpecName: "kube-api-access-g969t") pod "9a60437b-c6d5-41ba-a4cd-152b5476c385" (UID: "9a60437b-c6d5-41ba-a4cd-152b5476c385"). InnerVolumeSpecName "kube-api-access-g969t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.977309 4953 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.977352 4953 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.977363 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g969t\" (UniqueName: \"kubernetes.io/projected/9a60437b-c6d5-41ba-a4cd-152b5476c385-kube-api-access-g969t\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.977373 4953 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a60437b-c6d5-41ba-a4cd-152b5476c385-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.977383 4953 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a60437b-c6d5-41ba-a4cd-152b5476c385-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:06 crc kubenswrapper[4953]: I1211 10:33:06.977394 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a60437b-c6d5-41ba-a4cd-152b5476c385-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:07 crc kubenswrapper[4953]: I1211 10:33:07.513122 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6pxp-config-vp6f6" event={"ID":"9a60437b-c6d5-41ba-a4cd-152b5476c385","Type":"ContainerDied","Data":"ac46b7d14e2cf9b705f90de69fd48f593b47b7b201ff4b7d398331d6f2f48ede"} Dec 11 10:33:07 crc kubenswrapper[4953]: I1211 10:33:07.513399 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac46b7d14e2cf9b705f90de69fd48f593b47b7b201ff4b7d398331d6f2f48ede" Dec 11 10:33:07 crc 
kubenswrapper[4953]: I1211 10:33:07.513183 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n6pxp-config-vp6f6" Dec 11 10:33:07 crc kubenswrapper[4953]: I1211 10:33:07.862644 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:33:07 crc kubenswrapper[4953]: I1211 10:33:07.869168 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift\") pod \"swift-storage-0\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") " pod="openstack/swift-storage-0" Dec 11 10:33:07 crc kubenswrapper[4953]: I1211 10:33:07.972066 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n6pxp-config-vp6f6"] Dec 11 10:33:07 crc kubenswrapper[4953]: I1211 10:33:07.978750 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n6pxp-config-vp6f6"] Dec 11 10:33:08 crc kubenswrapper[4953]: I1211 10:33:08.009546 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 11 10:33:08 crc kubenswrapper[4953]: I1211 10:33:08.493897 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a60437b-c6d5-41ba-a4cd-152b5476c385" path="/var/lib/kubelet/pods/9a60437b-c6d5-41ba-a4cd-152b5476c385/volumes" Dec 11 10:33:08 crc kubenswrapper[4953]: I1211 10:33:08.679950 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 11 10:33:08 crc kubenswrapper[4953]: W1211 10:33:08.689639 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be1c768_78bb_476b_b51d_8e4fe80b8500.slice/crio-c9663b5e599da44a2b74b878eb12528722e78579e0db32a7b865205da603cdb9 WatchSource:0}: Error finding container c9663b5e599da44a2b74b878eb12528722e78579e0db32a7b865205da603cdb9: Status 404 returned error can't find the container with id c9663b5e599da44a2b74b878eb12528722e78579e0db32a7b865205da603cdb9 Dec 11 10:33:09 crc kubenswrapper[4953]: I1211 10:33:09.538170 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"c9663b5e599da44a2b74b878eb12528722e78579e0db32a7b865205da603cdb9"} Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.102056 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.228823 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.537929 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-af3a-account-create-update-jhm5l"] Dec 11 10:33:10 crc kubenswrapper[4953]: E1211 10:33:10.538743 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a60437b-c6d5-41ba-a4cd-152b5476c385" containerName="ovn-config" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.538771 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a60437b-c6d5-41ba-a4cd-152b5476c385" containerName="ovn-config" Dec 11 10:33:10 crc 
kubenswrapper[4953]: I1211 10:33:10.539016 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a60437b-c6d5-41ba-a4cd-152b5476c385" containerName="ovn-config" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.539838 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-af3a-account-create-update-jhm5l" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.546330 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.552202 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-6zplv"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.554441 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6zplv" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.566152 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-af3a-account-create-update-jhm5l"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.599933 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6zplv"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.625248 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scfp6\" (UniqueName: \"kubernetes.io/projected/97b9ff8e-f944-48ee-803a-d6873a9db805-kube-api-access-scfp6\") pod \"cinder-af3a-account-create-update-jhm5l\" (UID: \"97b9ff8e-f944-48ee-803a-d6873a9db805\") " pod="openstack/cinder-af3a-account-create-update-jhm5l" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.625365 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b9ff8e-f944-48ee-803a-d6873a9db805-operator-scripts\") pod \"cinder-af3a-account-create-update-jhm5l\" (UID: \"97b9ff8e-f944-48ee-803a-d6873a9db805\") " pod="openstack/cinder-af3a-account-create-update-jhm5l" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.649159 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-qnwm6"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.651306 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qnwm6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.664988 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3c8c-account-create-update-rrd7p"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.668135 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3c8c-account-create-update-rrd7p" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.669897 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.689719 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-qnwm6"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.699221 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3c8c-account-create-update-rrd7p"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.755840 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e05c83-6a7a-453a-89d5-ba471aba22e8-operator-scripts\") pod \"cinder-db-create-6zplv\" (UID: \"f7e05c83-6a7a-453a-89d5-ba471aba22e8\") " pod="openstack/cinder-db-create-6zplv" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.755949 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scfp6\" (UniqueName: \"kubernetes.io/projected/97b9ff8e-f944-48ee-803a-d6873a9db805-kube-api-access-scfp6\") pod \"cinder-af3a-account-create-update-jhm5l\" (UID: \"97b9ff8e-f944-48ee-803a-d6873a9db805\") " pod="openstack/cinder-af3a-account-create-update-jhm5l" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.756068 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3193537-daf8-4c54-9200-4db57f86b98d-operator-scripts\") pod \"barbican-db-create-qnwm6\" (UID: \"d3193537-daf8-4c54-9200-4db57f86b98d\") " pod="openstack/barbican-db-create-qnwm6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.756125 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b9ff8e-f944-48ee-803a-d6873a9db805-operator-scripts\") pod \"cinder-af3a-account-create-update-jhm5l\" (UID: \"97b9ff8e-f944-48ee-803a-d6873a9db805\") " pod="openstack/cinder-af3a-account-create-update-jhm5l" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.756166 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbb2d\" (UniqueName: \"kubernetes.io/projected/d3193537-daf8-4c54-9200-4db57f86b98d-kube-api-access-lbb2d\") pod \"barbican-db-create-qnwm6\" (UID: \"d3193537-daf8-4c54-9200-4db57f86b98d\") " pod="openstack/barbican-db-create-qnwm6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.756204 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/150b2b4b-1e20-4e44-a696-ccca1d850081-operator-scripts\") pod \"barbican-3c8c-account-create-update-rrd7p\" (UID: \"150b2b4b-1e20-4e44-a696-ccca1d850081\") " pod="openstack/barbican-3c8c-account-create-update-rrd7p" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.756336 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl9vv\" (UniqueName: \"kubernetes.io/projected/f7e05c83-6a7a-453a-89d5-ba471aba22e8-kube-api-access-bl9vv\") pod \"cinder-db-create-6zplv\" (UID: \"f7e05c83-6a7a-453a-89d5-ba471aba22e8\") " pod="openstack/cinder-db-create-6zplv" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.756394 
4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzz2m\" (UniqueName: \"kubernetes.io/projected/150b2b4b-1e20-4e44-a696-ccca1d850081-kube-api-access-pzz2m\") pod \"barbican-3c8c-account-create-update-rrd7p\" (UID: \"150b2b4b-1e20-4e44-a696-ccca1d850081\") " pod="openstack/barbican-3c8c-account-create-update-rrd7p" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.757897 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b9ff8e-f944-48ee-803a-d6873a9db805-operator-scripts\") pod \"cinder-af3a-account-create-update-jhm5l\" (UID: \"97b9ff8e-f944-48ee-803a-d6873a9db805\") " pod="openstack/cinder-af3a-account-create-update-jhm5l" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.797668 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-pqxj6"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.799588 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.802566 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scfp6\" (UniqueName: \"kubernetes.io/projected/97b9ff8e-f944-48ee-803a-d6873a9db805-kube-api-access-scfp6\") pod \"cinder-af3a-account-create-update-jhm5l\" (UID: \"97b9ff8e-f944-48ee-803a-d6873a9db805\") " pod="openstack/cinder-af3a-account-create-update-jhm5l" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.803172 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.803565 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.804197 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v2jcr" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.804887 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.811392 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pqxj6"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.858189 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbb2d\" (UniqueName: \"kubernetes.io/projected/d3193537-daf8-4c54-9200-4db57f86b98d-kube-api-access-lbb2d\") pod \"barbican-db-create-qnwm6\" (UID: \"d3193537-daf8-4c54-9200-4db57f86b98d\") " pod="openstack/barbican-db-create-qnwm6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.858249 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/150b2b4b-1e20-4e44-a696-ccca1d850081-operator-scripts\") pod \"barbican-3c8c-account-create-update-rrd7p\" (UID: \"150b2b4b-1e20-4e44-a696-ccca1d850081\") " pod="openstack/barbican-3c8c-account-create-update-rrd7p" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.858342 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl9vv\" (UniqueName: \"kubernetes.io/projected/f7e05c83-6a7a-453a-89d5-ba471aba22e8-kube-api-access-bl9vv\") pod \"cinder-db-create-6zplv\" (UID: \"f7e05c83-6a7a-453a-89d5-ba471aba22e8\") " 
pod="openstack/cinder-db-create-6zplv" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.858373 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzz2m\" (UniqueName: \"kubernetes.io/projected/150b2b4b-1e20-4e44-a696-ccca1d850081-kube-api-access-pzz2m\") pod \"barbican-3c8c-account-create-update-rrd7p\" (UID: \"150b2b4b-1e20-4e44-a696-ccca1d850081\") " pod="openstack/barbican-3c8c-account-create-update-rrd7p" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.858412 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l6j8\" (UniqueName: \"kubernetes.io/projected/4318ee7e-5751-4dc5-becf-a06da8ab5a59-kube-api-access-4l6j8\") pod \"keystone-db-sync-pqxj6\" (UID: \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\") " pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.858443 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4318ee7e-5751-4dc5-becf-a06da8ab5a59-config-data\") pod \"keystone-db-sync-pqxj6\" (UID: \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\") " pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.858489 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e05c83-6a7a-453a-89d5-ba471aba22e8-operator-scripts\") pod \"cinder-db-create-6zplv\" (UID: \"f7e05c83-6a7a-453a-89d5-ba471aba22e8\") " pod="openstack/cinder-db-create-6zplv" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.858532 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4318ee7e-5751-4dc5-becf-a06da8ab5a59-combined-ca-bundle\") pod \"keystone-db-sync-pqxj6\" (UID: \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\") " pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.858620 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3193537-daf8-4c54-9200-4db57f86b98d-operator-scripts\") pod \"barbican-db-create-qnwm6\" (UID: \"d3193537-daf8-4c54-9200-4db57f86b98d\") " pod="openstack/barbican-db-create-qnwm6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.861752 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3193537-daf8-4c54-9200-4db57f86b98d-operator-scripts\") pod \"barbican-db-create-qnwm6\" (UID: \"d3193537-daf8-4c54-9200-4db57f86b98d\") " pod="openstack/barbican-db-create-qnwm6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.862267 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e05c83-6a7a-453a-89d5-ba471aba22e8-operator-scripts\") pod \"cinder-db-create-6zplv\" (UID: \"f7e05c83-6a7a-453a-89d5-ba471aba22e8\") " pod="openstack/cinder-db-create-6zplv" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.862453 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/150b2b4b-1e20-4e44-a696-ccca1d850081-operator-scripts\") pod \"barbican-3c8c-account-create-update-rrd7p\" (UID: \"150b2b4b-1e20-4e44-a696-ccca1d850081\") " 
pod="openstack/barbican-3c8c-account-create-update-rrd7p" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.876041 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-af3a-account-create-update-jhm5l" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.889764 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzz2m\" (UniqueName: \"kubernetes.io/projected/150b2b4b-1e20-4e44-a696-ccca1d850081-kube-api-access-pzz2m\") pod \"barbican-3c8c-account-create-update-rrd7p\" (UID: \"150b2b4b-1e20-4e44-a696-ccca1d850081\") " pod="openstack/barbican-3c8c-account-create-update-rrd7p" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.893311 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6vj2c"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.893430 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbb2d\" (UniqueName: \"kubernetes.io/projected/d3193537-daf8-4c54-9200-4db57f86b98d-kube-api-access-lbb2d\") pod \"barbican-db-create-qnwm6\" (UID: \"d3193537-daf8-4c54-9200-4db57f86b98d\") " pod="openstack/barbican-db-create-qnwm6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.894401 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6vj2c" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.905990 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl9vv\" (UniqueName: \"kubernetes.io/projected/f7e05c83-6a7a-453a-89d5-ba471aba22e8-kube-api-access-bl9vv\") pod \"cinder-db-create-6zplv\" (UID: \"f7e05c83-6a7a-453a-89d5-ba471aba22e8\") " pod="openstack/cinder-db-create-6zplv" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.914509 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-242b-account-create-update-x7c2z"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.930882 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6vj2c"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.930991 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-242b-account-create-update-x7c2z" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.935876 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-242b-account-create-update-x7c2z"] Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.937153 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.967425 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8022e972-07c0-4d22-837f-d70700c0fc83-operator-scripts\") pod \"neutron-242b-account-create-update-x7c2z\" (UID: \"8022e972-07c0-4d22-837f-d70700c0fc83\") " pod="openstack/neutron-242b-account-create-update-x7c2z" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.967563 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l6j8\" (UniqueName: \"kubernetes.io/projected/4318ee7e-5751-4dc5-becf-a06da8ab5a59-kube-api-access-4l6j8\") pod \"keystone-db-sync-pqxj6\" (UID: \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\") " pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.967615 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4318ee7e-5751-4dc5-becf-a06da8ab5a59-config-data\") pod \"keystone-db-sync-pqxj6\" (UID: \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\") " pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.967690 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4318ee7e-5751-4dc5-becf-a06da8ab5a59-combined-ca-bundle\") pod \"keystone-db-sync-pqxj6\" (UID: \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\") " pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.967723 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6dxn\" (UniqueName: \"kubernetes.io/projected/8022e972-07c0-4d22-837f-d70700c0fc83-kube-api-access-t6dxn\") pod \"neutron-242b-account-create-update-x7c2z\" (UID: \"8022e972-07c0-4d22-837f-d70700c0fc83\") " pod="openstack/neutron-242b-account-create-update-x7c2z" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.973477 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4318ee7e-5751-4dc5-becf-a06da8ab5a59-combined-ca-bundle\") pod \"keystone-db-sync-pqxj6\" (UID: \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\") " pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.982184 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-qnwm6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.986917 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4318ee7e-5751-4dc5-becf-a06da8ab5a59-config-data\") pod \"keystone-db-sync-pqxj6\" (UID: \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\") " pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.991396 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l6j8\" (UniqueName: \"kubernetes.io/projected/4318ee7e-5751-4dc5-becf-a06da8ab5a59-kube-api-access-4l6j8\") pod \"keystone-db-sync-pqxj6\" (UID: \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\") " pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:10 crc kubenswrapper[4953]: I1211 10:33:10.994748 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3c8c-account-create-update-rrd7p" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.069505 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6dxn\" (UniqueName: \"kubernetes.io/projected/8022e972-07c0-4d22-837f-d70700c0fc83-kube-api-access-t6dxn\") pod \"neutron-242b-account-create-update-x7c2z\" (UID: \"8022e972-07c0-4d22-837f-d70700c0fc83\") " pod="openstack/neutron-242b-account-create-update-x7c2z" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.069611 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcggb\" (UniqueName: \"kubernetes.io/projected/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59-kube-api-access-gcggb\") pod \"neutron-db-create-6vj2c\" (UID: \"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59\") " pod="openstack/neutron-db-create-6vj2c" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.069636 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8022e972-07c0-4d22-837f-d70700c0fc83-operator-scripts\") pod \"neutron-242b-account-create-update-x7c2z\" (UID: \"8022e972-07c0-4d22-837f-d70700c0fc83\") " pod="openstack/neutron-242b-account-create-update-x7c2z" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.069736 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59-operator-scripts\") pod \"neutron-db-create-6vj2c\" (UID: \"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59\") " pod="openstack/neutron-db-create-6vj2c" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.070541 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8022e972-07c0-4d22-837f-d70700c0fc83-operator-scripts\") pod \"neutron-242b-account-create-update-x7c2z\" (UID: \"8022e972-07c0-4d22-837f-d70700c0fc83\") " pod="openstack/neutron-242b-account-create-update-x7c2z" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.088139 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6dxn\" (UniqueName: \"kubernetes.io/projected/8022e972-07c0-4d22-837f-d70700c0fc83-kube-api-access-t6dxn\") pod \"neutron-242b-account-create-update-x7c2z\" (UID: \"8022e972-07c0-4d22-837f-d70700c0fc83\") " pod="openstack/neutron-242b-account-create-update-x7c2z" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.143698 
4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.172231 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59-operator-scripts\") pod \"neutron-db-create-6vj2c\" (UID: \"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59\") " pod="openstack/neutron-db-create-6vj2c" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.171564 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59-operator-scripts\") pod \"neutron-db-create-6vj2c\" (UID: \"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59\") " pod="openstack/neutron-db-create-6vj2c" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.172395 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcggb\" (UniqueName: \"kubernetes.io/projected/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59-kube-api-access-gcggb\") pod \"neutron-db-create-6vj2c\" (UID: \"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59\") " pod="openstack/neutron-db-create-6vj2c" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.188157 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6zplv" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.191602 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcggb\" (UniqueName: \"kubernetes.io/projected/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59-kube-api-access-gcggb\") pod \"neutron-db-create-6vj2c\" (UID: \"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59\") " pod="openstack/neutron-db-create-6vj2c" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.357344 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6vj2c" Dec 11 10:33:11 crc kubenswrapper[4953]: I1211 10:33:11.366967 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-242b-account-create-update-x7c2z" Dec 11 10:33:21 crc kubenswrapper[4953]: E1211 10:33:21.230752 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f" Dec 11 10:33:21 crc kubenswrapper[4953]: E1211 10:33:21.231460 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcrts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-vzw7v_openstack(f099a9d1-d895-4fdc-84cc-28df6fb24db0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:33:21 crc kubenswrapper[4953]: E1211 10:33:21.232720 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-vzw7v" podUID="f099a9d1-d895-4fdc-84cc-28df6fb24db0" Dec 11 10:33:21 crc kubenswrapper[4953]: E1211 10:33:21.645242 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f\\\"\"" pod="openstack/glance-db-sync-vzw7v" 
podUID="f099a9d1-d895-4fdc-84cc-28df6fb24db0" Dec 11 10:33:21 crc kubenswrapper[4953]: I1211 10:33:21.726376 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-242b-account-create-update-x7c2z"] Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.005468 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-af3a-account-create-update-jhm5l"] Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.034420 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pqxj6"] Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.042286 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6vj2c"] Dec 11 10:33:22 crc kubenswrapper[4953]: W1211 10:33:22.050475 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0e6f8ed_38a5_46f4_b408_54f0f8f0be59.slice/crio-6dcfeec84a1867a4ef5c2a745d1223dfc0a5a9daebaec8268a31f5aef55f7320 WatchSource:0}: Error finding container 6dcfeec84a1867a4ef5c2a745d1223dfc0a5a9daebaec8268a31f5aef55f7320: Status 404 returned error can't find the container with id 6dcfeec84a1867a4ef5c2a745d1223dfc0a5a9daebaec8268a31f5aef55f7320 Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.151105 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3c8c-account-create-update-rrd7p"] Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.160109 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6zplv"] Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.166533 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-qnwm6"] Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.657134 4953 generic.go:334] "Generic (PLEG): container finished" podID="8022e972-07c0-4d22-837f-d70700c0fc83" containerID="6f26c9a908150dcfa2ac5012a20a7a3945672bf67dd3cf13f78bf68bf18ade63" exitCode=0 Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.657739 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-242b-account-create-update-x7c2z" event={"ID":"8022e972-07c0-4d22-837f-d70700c0fc83","Type":"ContainerDied","Data":"6f26c9a908150dcfa2ac5012a20a7a3945672bf67dd3cf13f78bf68bf18ade63"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.657795 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-242b-account-create-update-x7c2z" event={"ID":"8022e972-07c0-4d22-837f-d70700c0fc83","Type":"ContainerStarted","Data":"033f27904802c53a28a9ce714a75f1fee78850181be6eb1535159007d2dee781"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.662354 4953 generic.go:334] "Generic (PLEG): container finished" podID="97b9ff8e-f944-48ee-803a-d6873a9db805" containerID="79cc47d9dc3c03e712eaad55e52c68d02d784451419037cdd7fbdbf61ac6149e" exitCode=0 Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.662445 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-af3a-account-create-update-jhm5l" event={"ID":"97b9ff8e-f944-48ee-803a-d6873a9db805","Type":"ContainerDied","Data":"79cc47d9dc3c03e712eaad55e52c68d02d784451419037cdd7fbdbf61ac6149e"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.662475 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-af3a-account-create-update-jhm5l" 
event={"ID":"97b9ff8e-f944-48ee-803a-d6873a9db805","Type":"ContainerStarted","Data":"5a738edda3946d4373a7481091699e3bb830aeffcc146d29c5ec27cab6fbc5f7"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.666977 4953 generic.go:334] "Generic (PLEG): container finished" podID="d0e6f8ed-38a5-46f4-b408-54f0f8f0be59" containerID="e7f9da89bc4cc69fcfc29c86025b95c1e7102d882393aafef545cd1a55d66176" exitCode=0 Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.667032 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6vj2c" event={"ID":"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59","Type":"ContainerDied","Data":"e7f9da89bc4cc69fcfc29c86025b95c1e7102d882393aafef545cd1a55d66176"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.667101 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6vj2c" event={"ID":"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59","Type":"ContainerStarted","Data":"6dcfeec84a1867a4ef5c2a745d1223dfc0a5a9daebaec8268a31f5aef55f7320"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.669148 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pqxj6" event={"ID":"4318ee7e-5751-4dc5-becf-a06da8ab5a59","Type":"ContainerStarted","Data":"3762c695691421869ac42b8dc7842adda1e790c56789e96b8122a75e3dccaa8f"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.678874 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6zplv" event={"ID":"f7e05c83-6a7a-453a-89d5-ba471aba22e8","Type":"ContainerStarted","Data":"c0ec3c53a1addac2b6eaec9e38e1601216e6fa7d2457dfa7d6b2ce8322a36b97"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.678921 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6zplv" event={"ID":"f7e05c83-6a7a-453a-89d5-ba471aba22e8","Type":"ContainerStarted","Data":"162dd7dc9c412895c99669ec2821f09d25ad249d539a3388fa7673fd10d930c6"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.681632 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qnwm6" event={"ID":"d3193537-daf8-4c54-9200-4db57f86b98d","Type":"ContainerStarted","Data":"ff4df84b5455a06234056225db00cca3e71fae62243dad9881fe1f0298afdb96"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.681691 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qnwm6" event={"ID":"d3193537-daf8-4c54-9200-4db57f86b98d","Type":"ContainerStarted","Data":"f31336391549862261ae0b4b609c96eaf75aa3461f2c8fb68866d38109d87746"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.684229 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"84916ff0808e4afae4bbc6dc9c0bfcc649e85608c78bcca53fc062955964d97f"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.684364 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"8bd2acaf8a28b1f1656e66014334ca8748f846ad6e8ad38b27cb4bdf466f3173"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.684496 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"679d2553c36012b1b180157877c057ec44f2c2462adfbbecdb5379d3b623b02c"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.684738 4953 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"8f4c46cb4b9e3e20f278144150f92781df0603ba1ce189953a04f830ee3bc004"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.688066 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3c8c-account-create-update-rrd7p" event={"ID":"150b2b4b-1e20-4e44-a696-ccca1d850081","Type":"ContainerStarted","Data":"ce35d9c8db3f099f3512933e98a8fa5956b4c1181fcb2c19d2c45adeaab76074"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.688106 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3c8c-account-create-update-rrd7p" event={"ID":"150b2b4b-1e20-4e44-a696-ccca1d850081","Type":"ContainerStarted","Data":"4022da1840ac75a27124d6bd834cb6c84518a6e6aa677d7c88f2f65f6ae6cc45"} Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.740038 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-3c8c-account-create-update-rrd7p" podStartSLOduration=12.74001593 podStartE2EDuration="12.74001593s" podCreationTimestamp="2025-12-11 10:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:33:22.717963218 +0000 UTC m=+1320.741822251" watchObservedRunningTime="2025-12-11 10:33:22.74001593 +0000 UTC m=+1320.763874963" Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.745987 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-qnwm6" podStartSLOduration=12.745968512 podStartE2EDuration="12.745968512s" podCreationTimestamp="2025-12-11 10:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:33:22.737180294 +0000 UTC m=+1320.761039327" watchObservedRunningTime="2025-12-11 10:33:22.745968512 +0000 UTC m=+1320.769827545" Dec 11 10:33:22 crc kubenswrapper[4953]: I1211 10:33:22.754124 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-6zplv" podStartSLOduration=12.75410922 podStartE2EDuration="12.75410922s" podCreationTimestamp="2025-12-11 10:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:33:22.753937675 +0000 UTC m=+1320.777796708" watchObservedRunningTime="2025-12-11 10:33:22.75410922 +0000 UTC m=+1320.777968253" Dec 11 10:33:23 crc kubenswrapper[4953]: I1211 10:33:23.710334 4953 generic.go:334] "Generic (PLEG): container finished" podID="f7e05c83-6a7a-453a-89d5-ba471aba22e8" containerID="c0ec3c53a1addac2b6eaec9e38e1601216e6fa7d2457dfa7d6b2ce8322a36b97" exitCode=0 Dec 11 10:33:23 crc kubenswrapper[4953]: I1211 10:33:23.710786 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6zplv" event={"ID":"f7e05c83-6a7a-453a-89d5-ba471aba22e8","Type":"ContainerDied","Data":"c0ec3c53a1addac2b6eaec9e38e1601216e6fa7d2457dfa7d6b2ce8322a36b97"} Dec 11 10:33:23 crc kubenswrapper[4953]: I1211 10:33:23.714371 4953 generic.go:334] "Generic (PLEG): container finished" podID="d3193537-daf8-4c54-9200-4db57f86b98d" containerID="ff4df84b5455a06234056225db00cca3e71fae62243dad9881fe1f0298afdb96" exitCode=0 Dec 11 10:33:23 crc kubenswrapper[4953]: I1211 10:33:23.714431 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-qnwm6" event={"ID":"d3193537-daf8-4c54-9200-4db57f86b98d","Type":"ContainerDied","Data":"ff4df84b5455a06234056225db00cca3e71fae62243dad9881fe1f0298afdb96"} Dec 11 10:33:23 crc kubenswrapper[4953]: I1211 10:33:23.716284 4953 generic.go:334] "Generic (PLEG): container finished" podID="150b2b4b-1e20-4e44-a696-ccca1d850081" containerID="ce35d9c8db3f099f3512933e98a8fa5956b4c1181fcb2c19d2c45adeaab76074" exitCode=0 Dec 11 10:33:23 crc kubenswrapper[4953]: I1211 10:33:23.716470 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3c8c-account-create-update-rrd7p" event={"ID":"150b2b4b-1e20-4e44-a696-ccca1d850081","Type":"ContainerDied","Data":"ce35d9c8db3f099f3512933e98a8fa5956b4c1181fcb2c19d2c45adeaab76074"} Dec 11 10:33:27 crc kubenswrapper[4953]: E1211 10:33:27.237065 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97b9ff8e_f944_48ee_803a_d6873a9db805.slice/crio-conmon-79cc47d9dc3c03e712eaad55e52c68d02d784451419037cdd7fbdbf61ac6149e.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.755296 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qnwm6" event={"ID":"d3193537-daf8-4c54-9200-4db57f86b98d","Type":"ContainerDied","Data":"f31336391549862261ae0b4b609c96eaf75aa3461f2c8fb68866d38109d87746"} Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.755354 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f31336391549862261ae0b4b609c96eaf75aa3461f2c8fb68866d38109d87746" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.758313 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-242b-account-create-update-x7c2z" event={"ID":"8022e972-07c0-4d22-837f-d70700c0fc83","Type":"ContainerDied","Data":"033f27904802c53a28a9ce714a75f1fee78850181be6eb1535159007d2dee781"} Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.758372 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="033f27904802c53a28a9ce714a75f1fee78850181be6eb1535159007d2dee781" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.764274 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-af3a-account-create-update-jhm5l" event={"ID":"97b9ff8e-f944-48ee-803a-d6873a9db805","Type":"ContainerDied","Data":"5a738edda3946d4373a7481091699e3bb830aeffcc146d29c5ec27cab6fbc5f7"} Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.764333 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a738edda3946d4373a7481091699e3bb830aeffcc146d29c5ec27cab6fbc5f7" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.767657 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6vj2c" event={"ID":"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59","Type":"ContainerDied","Data":"6dcfeec84a1867a4ef5c2a745d1223dfc0a5a9daebaec8268a31f5aef55f7320"} Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.767728 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dcfeec84a1867a4ef5c2a745d1223dfc0a5a9daebaec8268a31f5aef55f7320" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.804472 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-242b-account-create-update-x7c2z" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.812174 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-af3a-account-create-update-jhm5l" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.820135 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6vj2c" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.842015 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qnwm6" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.911716 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scfp6\" (UniqueName: \"kubernetes.io/projected/97b9ff8e-f944-48ee-803a-d6873a9db805-kube-api-access-scfp6\") pod \"97b9ff8e-f944-48ee-803a-d6873a9db805\" (UID: \"97b9ff8e-f944-48ee-803a-d6873a9db805\") " Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.911789 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b9ff8e-f944-48ee-803a-d6873a9db805-operator-scripts\") pod \"97b9ff8e-f944-48ee-803a-d6873a9db805\" (UID: \"97b9ff8e-f944-48ee-803a-d6873a9db805\") " Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.911868 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8022e972-07c0-4d22-837f-d70700c0fc83-operator-scripts\") pod \"8022e972-07c0-4d22-837f-d70700c0fc83\" (UID: \"8022e972-07c0-4d22-837f-d70700c0fc83\") " Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.912213 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59-operator-scripts\") pod \"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59\" (UID: \"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59\") " Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.912292 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcggb\" (UniqueName: \"kubernetes.io/projected/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59-kube-api-access-gcggb\") pod \"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59\" (UID: \"d0e6f8ed-38a5-46f4-b408-54f0f8f0be59\") " Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.912315 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6dxn\" (UniqueName: \"kubernetes.io/projected/8022e972-07c0-4d22-837f-d70700c0fc83-kube-api-access-t6dxn\") pod \"8022e972-07c0-4d22-837f-d70700c0fc83\" (UID: \"8022e972-07c0-4d22-837f-d70700c0fc83\") " Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.913972 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0e6f8ed-38a5-46f4-b408-54f0f8f0be59" (UID: "d0e6f8ed-38a5-46f4-b408-54f0f8f0be59"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.914017 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8022e972-07c0-4d22-837f-d70700c0fc83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8022e972-07c0-4d22-837f-d70700c0fc83" (UID: "8022e972-07c0-4d22-837f-d70700c0fc83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.914048 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b9ff8e-f944-48ee-803a-d6873a9db805-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97b9ff8e-f944-48ee-803a-d6873a9db805" (UID: "97b9ff8e-f944-48ee-803a-d6873a9db805"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.918142 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8022e972-07c0-4d22-837f-d70700c0fc83-kube-api-access-t6dxn" (OuterVolumeSpecName: "kube-api-access-t6dxn") pod "8022e972-07c0-4d22-837f-d70700c0fc83" (UID: "8022e972-07c0-4d22-837f-d70700c0fc83"). InnerVolumeSpecName "kube-api-access-t6dxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.919488 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59-kube-api-access-gcggb" (OuterVolumeSpecName: "kube-api-access-gcggb") pod "d0e6f8ed-38a5-46f4-b408-54f0f8f0be59" (UID: "d0e6f8ed-38a5-46f4-b408-54f0f8f0be59"). InnerVolumeSpecName "kube-api-access-gcggb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:27 crc kubenswrapper[4953]: I1211 10:33:27.924687 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b9ff8e-f944-48ee-803a-d6873a9db805-kube-api-access-scfp6" (OuterVolumeSpecName: "kube-api-access-scfp6") pod "97b9ff8e-f944-48ee-803a-d6873a9db805" (UID: "97b9ff8e-f944-48ee-803a-d6873a9db805"). InnerVolumeSpecName "kube-api-access-scfp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.013457 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbb2d\" (UniqueName: \"kubernetes.io/projected/d3193537-daf8-4c54-9200-4db57f86b98d-kube-api-access-lbb2d\") pod \"d3193537-daf8-4c54-9200-4db57f86b98d\" (UID: \"d3193537-daf8-4c54-9200-4db57f86b98d\") " Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.013901 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3193537-daf8-4c54-9200-4db57f86b98d-operator-scripts\") pod \"d3193537-daf8-4c54-9200-4db57f86b98d\" (UID: \"d3193537-daf8-4c54-9200-4db57f86b98d\") " Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.014286 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3193537-daf8-4c54-9200-4db57f86b98d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3193537-daf8-4c54-9200-4db57f86b98d" (UID: "d3193537-daf8-4c54-9200-4db57f86b98d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.014325 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8022e972-07c0-4d22-837f-d70700c0fc83-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.014342 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.014354 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcggb\" (UniqueName: \"kubernetes.io/projected/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59-kube-api-access-gcggb\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.014366 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6dxn\" (UniqueName: \"kubernetes.io/projected/8022e972-07c0-4d22-837f-d70700c0fc83-kube-api-access-t6dxn\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.014374 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scfp6\" (UniqueName: \"kubernetes.io/projected/97b9ff8e-f944-48ee-803a-d6873a9db805-kube-api-access-scfp6\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.014383 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b9ff8e-f944-48ee-803a-d6873a9db805-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.017918 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3193537-daf8-4c54-9200-4db57f86b98d-kube-api-access-lbb2d" (OuterVolumeSpecName: "kube-api-access-lbb2d") pod "d3193537-daf8-4c54-9200-4db57f86b98d" (UID: "d3193537-daf8-4c54-9200-4db57f86b98d"). InnerVolumeSpecName "kube-api-access-lbb2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.115667 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbb2d\" (UniqueName: \"kubernetes.io/projected/d3193537-daf8-4c54-9200-4db57f86b98d-kube-api-access-lbb2d\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.115704 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3193537-daf8-4c54-9200-4db57f86b98d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.172159 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3c8c-account-create-update-rrd7p" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.199988 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-6zplv" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.318194 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e05c83-6a7a-453a-89d5-ba471aba22e8-operator-scripts\") pod \"f7e05c83-6a7a-453a-89d5-ba471aba22e8\" (UID: \"f7e05c83-6a7a-453a-89d5-ba471aba22e8\") " Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.318420 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzz2m\" (UniqueName: \"kubernetes.io/projected/150b2b4b-1e20-4e44-a696-ccca1d850081-kube-api-access-pzz2m\") pod \"150b2b4b-1e20-4e44-a696-ccca1d850081\" (UID: \"150b2b4b-1e20-4e44-a696-ccca1d850081\") " Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.318517 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl9vv\" (UniqueName: \"kubernetes.io/projected/f7e05c83-6a7a-453a-89d5-ba471aba22e8-kube-api-access-bl9vv\") pod \"f7e05c83-6a7a-453a-89d5-ba471aba22e8\" (UID: \"f7e05c83-6a7a-453a-89d5-ba471aba22e8\") " Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.318620 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/150b2b4b-1e20-4e44-a696-ccca1d850081-operator-scripts\") pod \"150b2b4b-1e20-4e44-a696-ccca1d850081\" (UID: \"150b2b4b-1e20-4e44-a696-ccca1d850081\") " Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.319024 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e05c83-6a7a-453a-89d5-ba471aba22e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7e05c83-6a7a-453a-89d5-ba471aba22e8" (UID: "f7e05c83-6a7a-453a-89d5-ba471aba22e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.319309 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150b2b4b-1e20-4e44-a696-ccca1d850081-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "150b2b4b-1e20-4e44-a696-ccca1d850081" (UID: "150b2b4b-1e20-4e44-a696-ccca1d850081"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.325243 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e05c83-6a7a-453a-89d5-ba471aba22e8-kube-api-access-bl9vv" (OuterVolumeSpecName: "kube-api-access-bl9vv") pod "f7e05c83-6a7a-453a-89d5-ba471aba22e8" (UID: "f7e05c83-6a7a-453a-89d5-ba471aba22e8"). InnerVolumeSpecName "kube-api-access-bl9vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.325631 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150b2b4b-1e20-4e44-a696-ccca1d850081-kube-api-access-pzz2m" (OuterVolumeSpecName: "kube-api-access-pzz2m") pod "150b2b4b-1e20-4e44-a696-ccca1d850081" (UID: "150b2b4b-1e20-4e44-a696-ccca1d850081"). InnerVolumeSpecName "kube-api-access-pzz2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.420370 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzz2m\" (UniqueName: \"kubernetes.io/projected/150b2b4b-1e20-4e44-a696-ccca1d850081-kube-api-access-pzz2m\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.420405 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl9vv\" (UniqueName: \"kubernetes.io/projected/f7e05c83-6a7a-453a-89d5-ba471aba22e8-kube-api-access-bl9vv\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.420415 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/150b2b4b-1e20-4e44-a696-ccca1d850081-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.420425 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e05c83-6a7a-453a-89d5-ba471aba22e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.788427 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pqxj6" event={"ID":"4318ee7e-5751-4dc5-becf-a06da8ab5a59","Type":"ContainerStarted","Data":"92e616415816ed37c9fe41bce3b5cf2b458cff13348e5f0516d84a6c59d4c830"} Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.790766 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6zplv" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.790829 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6zplv" event={"ID":"f7e05c83-6a7a-453a-89d5-ba471aba22e8","Type":"ContainerDied","Data":"162dd7dc9c412895c99669ec2821f09d25ad249d539a3388fa7673fd10d930c6"} Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.790886 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="162dd7dc9c412895c99669ec2821f09d25ad249d539a3388fa7673fd10d930c6" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.809323 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-pqxj6" podStartSLOduration=12.61105197 podStartE2EDuration="18.809301663s" podCreationTimestamp="2025-12-11 10:33:10 +0000 UTC" firstStartedPulling="2025-12-11 10:33:22.009410675 +0000 UTC m=+1320.033269708" lastFinishedPulling="2025-12-11 10:33:28.207660368 +0000 UTC m=+1326.231519401" observedRunningTime="2025-12-11 10:33:28.803323221 +0000 UTC m=+1326.827182254" watchObservedRunningTime="2025-12-11 10:33:28.809301663 +0000 UTC m=+1326.833160686" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.812742 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"8271a6a07ac8401063b754218c3eb89ceb4f2d9d019082057eb897dcd5350656"} Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.812835 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"47e0171f5c393def51346598fe0050490ca2584402ed6532e4a68c71c29d1284"} Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.812853 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"bf1b66be16060aee36932d81a73465cd1174ad5e0ce2ac136fa9b17ea2beb026"} Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.817222 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-af3a-account-create-update-jhm5l" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.816915 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3c8c-account-create-update-rrd7p" event={"ID":"150b2b4b-1e20-4e44-a696-ccca1d850081","Type":"ContainerDied","Data":"4022da1840ac75a27124d6bd834cb6c84518a6e6aa677d7c88f2f65f6ae6cc45"} Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.817567 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4022da1840ac75a27124d6bd834cb6c84518a6e6aa677d7c88f2f65f6ae6cc45" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.817644 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3c8c-account-create-update-rrd7p" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.818721 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6vj2c" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.818766 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-242b-account-create-update-x7c2z" Dec 11 10:33:28 crc kubenswrapper[4953]: I1211 10:33:28.818878 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qnwm6" Dec 11 10:33:29 crc kubenswrapper[4953]: I1211 10:33:29.828940 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"ee01036005d992c399d8891c4088b620c28089677482095eb23ddbcf5787ed0f"} Dec 11 10:33:30 crc kubenswrapper[4953]: I1211 10:33:30.846112 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"8981b379cfe002ec1ffbcd789bf3f9088d55241543514d305383406d070e9749"} Dec 11 10:33:30 crc kubenswrapper[4953]: I1211 10:33:30.846560 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"42ee56a6413b971f972dd83deea70f7f4ed0f5bd15d3d8739f47c3de625b36da"} Dec 11 10:33:31 crc kubenswrapper[4953]: I1211 10:33:31.856555 4953 generic.go:334] "Generic (PLEG): container finished" podID="4318ee7e-5751-4dc5-becf-a06da8ab5a59" containerID="92e616415816ed37c9fe41bce3b5cf2b458cff13348e5f0516d84a6c59d4c830" exitCode=0 Dec 11 10:33:31 crc kubenswrapper[4953]: I1211 10:33:31.856653 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pqxj6" event={"ID":"4318ee7e-5751-4dc5-becf-a06da8ab5a59","Type":"ContainerDied","Data":"92e616415816ed37c9fe41bce3b5cf2b458cff13348e5f0516d84a6c59d4c830"} Dec 11 10:33:31 crc kubenswrapper[4953]: I1211 10:33:31.865800 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"510beded97d4416b8880cd56e6120af1f949d427769ab1ee2c169557d12d5494"} Dec 11 10:33:31 crc kubenswrapper[4953]: I1211 10:33:31.865848 4953 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"3c65359d49ee68c46b25f7c48cca23725c2a07a228cfed6a3b8c90cef4f401ce"} Dec 11 10:33:31 crc kubenswrapper[4953]: I1211 10:33:31.865862 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"55455d29b2f9f09dccbeb1ee95244b733e578206c81bca651b8b08a2abc3da6f"} Dec 11 10:33:31 crc kubenswrapper[4953]: I1211 10:33:31.865873 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"c7fa20846bc15438ea48e549cb0457b5fdbbcd2598a4d940ee938fb4fb3a9db3"} Dec 11 10:33:31 crc kubenswrapper[4953]: I1211 10:33:31.865884 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerStarted","Data":"bc0e3f085ef80ef3d58ffae3ef2a52f5bf40447e1f3f4fae4ba935bd88ae1802"} Dec 11 10:33:32 crc kubenswrapper[4953]: I1211 10:33:32.911215 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.366939452 podStartE2EDuration="58.911195979s" podCreationTimestamp="2025-12-11 10:32:34 +0000 UTC" firstStartedPulling="2025-12-11 10:33:08.692477617 +0000 UTC m=+1306.716336650" lastFinishedPulling="2025-12-11 10:33:30.236734144 +0000 UTC m=+1328.260593177" observedRunningTime="2025-12-11 10:33:32.906552247 +0000 UTC m=+1330.930411280" watchObservedRunningTime="2025-12-11 10:33:32.911195979 +0000 UTC m=+1330.935055012" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.204355 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-9dr7l"] Dec 11 10:33:33 crc kubenswrapper[4953]: E1211 10:33:33.205012 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150b2b4b-1e20-4e44-a696-ccca1d850081" containerName="mariadb-account-create-update" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.205028 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="150b2b4b-1e20-4e44-a696-ccca1d850081" containerName="mariadb-account-create-update" Dec 11 10:33:33 crc kubenswrapper[4953]: E1211 10:33:33.205051 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e6f8ed-38a5-46f4-b408-54f0f8f0be59" containerName="mariadb-database-create" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.205057 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e6f8ed-38a5-46f4-b408-54f0f8f0be59" containerName="mariadb-database-create" Dec 11 10:33:33 crc kubenswrapper[4953]: E1211 10:33:33.205068 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e05c83-6a7a-453a-89d5-ba471aba22e8" containerName="mariadb-database-create" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.205074 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e05c83-6a7a-453a-89d5-ba471aba22e8" containerName="mariadb-database-create" Dec 11 10:33:33 crc kubenswrapper[4953]: E1211 10:33:33.205084 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3193537-daf8-4c54-9200-4db57f86b98d" containerName="mariadb-database-create" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.205089 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3193537-daf8-4c54-9200-4db57f86b98d" 
containerName="mariadb-database-create" Dec 11 10:33:33 crc kubenswrapper[4953]: E1211 10:33:33.205098 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b9ff8e-f944-48ee-803a-d6873a9db805" containerName="mariadb-account-create-update" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.205104 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b9ff8e-f944-48ee-803a-d6873a9db805" containerName="mariadb-account-create-update" Dec 11 10:33:33 crc kubenswrapper[4953]: E1211 10:33:33.205117 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8022e972-07c0-4d22-837f-d70700c0fc83" containerName="mariadb-account-create-update" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.205123 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8022e972-07c0-4d22-837f-d70700c0fc83" containerName="mariadb-account-create-update" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.205288 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e05c83-6a7a-453a-89d5-ba471aba22e8" containerName="mariadb-database-create" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.205300 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="150b2b4b-1e20-4e44-a696-ccca1d850081" containerName="mariadb-account-create-update" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.205312 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e6f8ed-38a5-46f4-b408-54f0f8f0be59" containerName="mariadb-database-create" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.205324 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="8022e972-07c0-4d22-837f-d70700c0fc83" containerName="mariadb-account-create-update" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.205335 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3193537-daf8-4c54-9200-4db57f86b98d" containerName="mariadb-database-create" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.205345 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b9ff8e-f944-48ee-803a-d6873a9db805" containerName="mariadb-account-create-update" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.206229 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.208950 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.226586 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-9dr7l"] Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.308191 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.308267 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.308336 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fglbm\" (UniqueName: \"kubernetes.io/projected/d6f147c9-83bc-40ae-aabd-f6879d935e40-kube-api-access-fglbm\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.308382 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.308414 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-config\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.308434 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.312143 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.409206 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4318ee7e-5751-4dc5-becf-a06da8ab5a59-combined-ca-bundle\") pod \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\" (UID: \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\") " Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.409933 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l6j8\" (UniqueName: \"kubernetes.io/projected/4318ee7e-5751-4dc5-becf-a06da8ab5a59-kube-api-access-4l6j8\") pod \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\" (UID: \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\") " Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.410197 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4318ee7e-5751-4dc5-becf-a06da8ab5a59-config-data\") pod \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\" (UID: \"4318ee7e-5751-4dc5-becf-a06da8ab5a59\") " Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.410430 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.410484 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.410545 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fglbm\" (UniqueName: \"kubernetes.io/projected/d6f147c9-83bc-40ae-aabd-f6879d935e40-kube-api-access-fglbm\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.410615 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.410692 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-config\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.410722 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.411776 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.411983 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.412060 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.412307 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.412445 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-config\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.417249 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4318ee7e-5751-4dc5-becf-a06da8ab5a59-kube-api-access-4l6j8" (OuterVolumeSpecName: "kube-api-access-4l6j8") pod "4318ee7e-5751-4dc5-becf-a06da8ab5a59" (UID: "4318ee7e-5751-4dc5-becf-a06da8ab5a59"). InnerVolumeSpecName "kube-api-access-4l6j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.434455 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fglbm\" (UniqueName: \"kubernetes.io/projected/d6f147c9-83bc-40ae-aabd-f6879d935e40-kube-api-access-fglbm\") pod \"dnsmasq-dns-8467b54bcc-9dr7l\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.436665 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4318ee7e-5751-4dc5-becf-a06da8ab5a59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4318ee7e-5751-4dc5-becf-a06da8ab5a59" (UID: "4318ee7e-5751-4dc5-becf-a06da8ab5a59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.458352 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4318ee7e-5751-4dc5-becf-a06da8ab5a59-config-data" (OuterVolumeSpecName: "config-data") pod "4318ee7e-5751-4dc5-becf-a06da8ab5a59" (UID: "4318ee7e-5751-4dc5-becf-a06da8ab5a59"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.512652 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4318ee7e-5751-4dc5-becf-a06da8ab5a59-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.512788 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4318ee7e-5751-4dc5-becf-a06da8ab5a59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.513466 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l6j8\" (UniqueName: \"kubernetes.io/projected/4318ee7e-5751-4dc5-becf-a06da8ab5a59-kube-api-access-4l6j8\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.608373 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.881221 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pqxj6" event={"ID":"4318ee7e-5751-4dc5-becf-a06da8ab5a59","Type":"ContainerDied","Data":"3762c695691421869ac42b8dc7842adda1e790c56789e96b8122a75e3dccaa8f"} Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.881624 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3762c695691421869ac42b8dc7842adda1e790c56789e96b8122a75e3dccaa8f" Dec 11 10:33:33 crc kubenswrapper[4953]: I1211 10:33:33.881259 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pqxj6" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.030586 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-9dr7l"] Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.065975 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-h8dgc"] Dec 11 10:33:34 crc kubenswrapper[4953]: E1211 10:33:34.066430 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4318ee7e-5751-4dc5-becf-a06da8ab5a59" containerName="keystone-db-sync" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.066452 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4318ee7e-5751-4dc5-becf-a06da8ab5a59" containerName="keystone-db-sync" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.066641 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="4318ee7e-5751-4dc5-becf-a06da8ab5a59" containerName="keystone-db-sync" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.067259 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h8dgc" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.071589 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.071624 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v2jcr" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.071673 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.071593 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.081892 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.086169 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58647bbf65-5xnbq"] Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.088113 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.094115 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h8dgc"] Dec 11 10:33:34 crc kubenswrapper[4953]: W1211 10:33:34.117380 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6f147c9_83bc_40ae_aabd_f6879d935e40.slice/crio-04ca47272b0639c505d18e8bd04983d21c2d11332164d2705a70e47ffd4912c7 WatchSource:0}: Error finding container 04ca47272b0639c505d18e8bd04983d21c2d11332164d2705a70e47ffd4912c7: Status 404 returned error can't find the container with id 04ca47272b0639c505d18e8bd04983d21c2d11332164d2705a70e47ffd4912c7 Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.151220 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58647bbf65-5xnbq"] Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.167611 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-9dr7l"] Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.245654 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-combined-ca-bundle\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.245732 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-credential-keys\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.245771 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgqhz\" (UniqueName: \"kubernetes.io/projected/aabd22c8-8148-45fc-8d7f-af29844bc4f7-kube-api-access-bgqhz\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.245795 4953 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-config\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.245818 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-scripts\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.245849 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-ovsdbserver-sb\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.245873 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-config-data\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.245908 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dfck\" (UniqueName: \"kubernetes.io/projected/183cc7ed-0993-42e6-9219-bc93a27cdedd-kube-api-access-9dfck\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.245954 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-ovsdbserver-nb\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.245975 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-fernet-keys\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.245990 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-dns-swift-storage-0\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.246008 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-dns-svc\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" Dec 11 10:33:34 
crc kubenswrapper[4953]: I1211 10:33:34.264649 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-gnwcw"] Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.266187 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gnwcw" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.269916 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.270269 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vqh69" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.273399 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.313787 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.316630 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.321710 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.322298 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.327641 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gnwcw"] Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348160 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-ovsdbserver-sb\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348221 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-config-data\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348264 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbr2k\" (UniqueName: \"kubernetes.io/projected/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-kube-api-access-cbr2k\") pod \"neutron-db-sync-gnwcw\" (UID: \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\") " pod="openstack/neutron-db-sync-gnwcw" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348307 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-config-data\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348334 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dfck\" (UniqueName: \"kubernetes.io/projected/183cc7ed-0993-42e6-9219-bc93a27cdedd-kube-api-access-9dfck\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" 
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348373 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-combined-ca-bundle\") pod \"neutron-db-sync-gnwcw\" (UID: \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\") " pod="openstack/neutron-db-sync-gnwcw"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348411 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-run-httpd\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348439 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-log-httpd\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348472 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-ovsdbserver-nb\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348513 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmxmt\" (UniqueName: \"kubernetes.io/projected/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-kube-api-access-cmxmt\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348541 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-fernet-keys\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348564 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-dns-swift-storage-0\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348612 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-dns-svc\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348641 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-scripts\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348663 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-config\") pod \"neutron-db-sync-gnwcw\" (UID: \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\") " pod="openstack/neutron-db-sync-gnwcw"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348696 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348723 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-combined-ca-bundle\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348758 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-credential-keys\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348811 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgqhz\" (UniqueName: \"kubernetes.io/projected/aabd22c8-8148-45fc-8d7f-af29844bc4f7-kube-api-access-bgqhz\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348843 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-config\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348891 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-scripts\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.348920 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.349256 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.349331 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-ovsdbserver-sb\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.350197 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-config\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.351061 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-ovsdbserver-nb\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.351233 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-dns-svc\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.351519 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-dns-swift-storage-0\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.372936 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-scripts\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.372962 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-config-data\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.380885 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-fernet-keys\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.391835 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-credential-keys\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.399323 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-combined-ca-bundle\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.403241 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dfck\" (UniqueName: \"kubernetes.io/projected/183cc7ed-0993-42e6-9219-bc93a27cdedd-kube-api-access-9dfck\") pod \"dnsmasq-dns-58647bbf65-5xnbq\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " pod="openstack/dnsmasq-dns-58647bbf65-5xnbq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.409139 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgqhz\" (UniqueName: \"kubernetes.io/projected/aabd22c8-8148-45fc-8d7f-af29844bc4f7-kube-api-access-bgqhz\") pod \"keystone-bootstrap-h8dgc\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " pod="openstack/keystone-bootstrap-h8dgc"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.415477 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-fzjwm"]
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.416879 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.418440 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58647bbf65-5xnbq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.425282 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2dmsf"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.425306 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.425357 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.425806 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fzjwm"]
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.452381 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-run-httpd\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.452425 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-log-httpd\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.452454 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmxmt\" (UniqueName: \"kubernetes.io/projected/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-kube-api-access-cmxmt\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.452485 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-scripts\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.452507 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-config\") pod \"neutron-db-sync-gnwcw\" (UID: \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\") " pod="openstack/neutron-db-sync-gnwcw"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.452522 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.452602 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.452641 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbr2k\" (UniqueName: \"kubernetes.io/projected/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-kube-api-access-cbr2k\") pod \"neutron-db-sync-gnwcw\" (UID: \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\") " pod="openstack/neutron-db-sync-gnwcw"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.452669 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-config-data\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.452696 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-combined-ca-bundle\") pod \"neutron-db-sync-gnwcw\" (UID: \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\") " pod="openstack/neutron-db-sync-gnwcw"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.456241 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-run-httpd\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.456282 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-log-httpd\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.470796 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-scripts\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.472501 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.488231 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.489234 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-config\") pod \"neutron-db-sync-gnwcw\" (UID: \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\") " pod="openstack/neutron-db-sync-gnwcw"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.489551 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-config-data\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.489742 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-combined-ca-bundle\") pod \"neutron-db-sync-gnwcw\" (UID: \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\") " pod="openstack/neutron-db-sync-gnwcw"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.500178 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmxmt\" (UniqueName: \"kubernetes.io/projected/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-kube-api-access-cmxmt\") pod \"ceilometer-0\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.501366 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbr2k\" (UniqueName: \"kubernetes.io/projected/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-kube-api-access-cbr2k\") pod \"neutron-db-sync-gnwcw\" (UID: \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\") " pod="openstack/neutron-db-sync-gnwcw"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.528614 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-vj92k"]
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.529533 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vj92k"]
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.529762 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vj92k"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.535916 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tl9gl"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.536252 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.536348 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58647bbf65-5xnbq"]
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.546156 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-kv78b"]
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.572564 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kv78b"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.576663 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-etc-machine-id\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.576849 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdjkq\" (UniqueName: \"kubernetes.io/projected/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-kube-api-access-mdjkq\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.577252 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.580337 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-db-sync-config-data\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.578456 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sjbdx"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.582974 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-combined-ca-bundle\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.583023 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-scripts\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.578521 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.583044 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-config-data\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.607634 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kv78b"]
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.632068 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fd458c8cc-bk9wq"]
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.634123 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.648092 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gnwcw"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.652728 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fd458c8cc-bk9wq"]
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.684564 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-combined-ca-bundle\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.684712 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-scripts\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.684745 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-config-data\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.684780 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-ovsdbserver-nb\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.684821 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-etc-machine-id\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.684848 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c46c2893-6218-455e-a4ee-cf1b4cda45b7-db-sync-config-data\") pod \"barbican-db-sync-vj92k\" (UID: \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\") " pod="openstack/barbican-db-sync-vj92k"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.684891 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-config-data\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.684939 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-dns-swift-storage-0\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.684963 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46c2893-6218-455e-a4ee-cf1b4cda45b7-combined-ca-bundle\") pod \"barbican-db-sync-vj92k\" (UID: \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\") " pod="openstack/barbican-db-sync-vj92k"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.684986 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdjkq\" (UniqueName: \"kubernetes.io/projected/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-kube-api-access-mdjkq\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.685041 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-ovsdbserver-sb\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.685119 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-scripts\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.685146 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24b23531-3ad1-4b46-88f6-e930d79b6556-logs\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.685196 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-combined-ca-bundle\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.685216 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-config\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.685232 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-dns-svc\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.685285 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-db-sync-config-data\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.685194 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-etc-machine-id\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.685540 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgbpp\" (UniqueName: \"kubernetes.io/projected/24b23531-3ad1-4b46-88f6-e930d79b6556-kube-api-access-hgbpp\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.685599 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wqz\" (UniqueName: \"kubernetes.io/projected/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-kube-api-access-r7wqz\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.685664 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9d7f\" (UniqueName: \"kubernetes.io/projected/c46c2893-6218-455e-a4ee-cf1b4cda45b7-kube-api-access-g9d7f\") pod \"barbican-db-sync-vj92k\" (UID: \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\") " pod="openstack/barbican-db-sync-vj92k"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.688877 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-combined-ca-bundle\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.689942 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-db-sync-config-data\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.693453 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-config-data\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.697655 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h8dgc"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.712623 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-scripts\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.712698 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdjkq\" (UniqueName: \"kubernetes.io/projected/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-kube-api-access-mdjkq\") pod \"cinder-db-sync-fzjwm\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " pod="openstack/cinder-db-sync-fzjwm"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.849086 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851146 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-ovsdbserver-nb\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851196 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c46c2893-6218-455e-a4ee-cf1b4cda45b7-db-sync-config-data\") pod \"barbican-db-sync-vj92k\" (UID: \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\") " pod="openstack/barbican-db-sync-vj92k"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851224 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-config-data\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851254 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-dns-swift-storage-0\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851278 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46c2893-6218-455e-a4ee-cf1b4cda45b7-combined-ca-bundle\") pod \"barbican-db-sync-vj92k\" (UID: \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\") " pod="openstack/barbican-db-sync-vj92k"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851314 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-ovsdbserver-sb\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq"
Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851347 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-scripts\") pod \"placement-db-sync-kv78b\" (UID:
\"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851368 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24b23531-3ad1-4b46-88f6-e930d79b6556-logs\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851384 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-dns-svc\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851399 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-combined-ca-bundle\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851413 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-config\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851441 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7wqz\" (UniqueName: \"kubernetes.io/projected/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-kube-api-access-r7wqz\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851459 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgbpp\" (UniqueName: \"kubernetes.io/projected/24b23531-3ad1-4b46-88f6-e930d79b6556-kube-api-access-hgbpp\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.851479 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9d7f\" (UniqueName: \"kubernetes.io/projected/c46c2893-6218-455e-a4ee-cf1b4cda45b7-kube-api-access-g9d7f\") pod \"barbican-db-sync-vj92k\" (UID: \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\") " pod="openstack/barbican-db-sync-vj92k" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.852733 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24b23531-3ad1-4b46-88f6-e930d79b6556-logs\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.853458 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-dns-svc\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.853851 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-ovsdbserver-sb\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.857166 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-ovsdbserver-nb\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.858010 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-dns-swift-storage-0\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.858264 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-combined-ca-bundle\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.858737 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-config\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.860905 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-scripts\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.865592 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-config-data\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.874234 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46c2893-6218-455e-a4ee-cf1b4cda45b7-combined-ca-bundle\") pod \"barbican-db-sync-vj92k\" (UID: \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\") " pod="openstack/barbican-db-sync-vj92k" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.876407 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c46c2893-6218-455e-a4ee-cf1b4cda45b7-db-sync-config-data\") pod \"barbican-db-sync-vj92k\" (UID: \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\") " pod="openstack/barbican-db-sync-vj92k" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.884661 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9d7f\" (UniqueName: \"kubernetes.io/projected/c46c2893-6218-455e-a4ee-cf1b4cda45b7-kube-api-access-g9d7f\") pod 
\"barbican-db-sync-vj92k\" (UID: \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\") " pod="openstack/barbican-db-sync-vj92k" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.885856 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7wqz\" (UniqueName: \"kubernetes.io/projected/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-kube-api-access-r7wqz\") pod \"dnsmasq-dns-fd458c8cc-bk9wq\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.889000 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgbpp\" (UniqueName: \"kubernetes.io/projected/24b23531-3ad1-4b46-88f6-e930d79b6556-kube-api-access-hgbpp\") pod \"placement-db-sync-kv78b\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") " pod="openstack/placement-db-sync-kv78b" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.889465 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vj92k" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.891356 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fzjwm" Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.902374 4953 generic.go:334] "Generic (PLEG): container finished" podID="d6f147c9-83bc-40ae-aabd-f6879d935e40" containerID="3aa53c9c19d93284c07c2540ca4b29164a57f91f0062913a1ba6fc4b3255c628" exitCode=0 Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.902458 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" event={"ID":"d6f147c9-83bc-40ae-aabd-f6879d935e40","Type":"ContainerDied","Data":"3aa53c9c19d93284c07c2540ca4b29164a57f91f0062913a1ba6fc4b3255c628"} Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.902501 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" event={"ID":"d6f147c9-83bc-40ae-aabd-f6879d935e40","Type":"ContainerStarted","Data":"04ca47272b0639c505d18e8bd04983d21c2d11332164d2705a70e47ffd4912c7"} Dec 11 10:33:34 crc kubenswrapper[4953]: I1211 10:33:34.907933 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kv78b" Dec 11 10:33:35 crc kubenswrapper[4953]: I1211 10:33:35.070941 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:33:35 crc kubenswrapper[4953]: I1211 10:33:35.158779 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58647bbf65-5xnbq"] Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.674943 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.771082 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gnwcw"] Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.832727 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-dns-svc\") pod \"d6f147c9-83bc-40ae-aabd-f6879d935e40\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.832820 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-config\") pod \"d6f147c9-83bc-40ae-aabd-f6879d935e40\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.832952 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-dns-swift-storage-0\") pod \"d6f147c9-83bc-40ae-aabd-f6879d935e40\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.832988 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fglbm\" (UniqueName: \"kubernetes.io/projected/d6f147c9-83bc-40ae-aabd-f6879d935e40-kube-api-access-fglbm\") pod \"d6f147c9-83bc-40ae-aabd-f6879d935e40\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.833108 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-ovsdbserver-nb\") pod \"d6f147c9-83bc-40ae-aabd-f6879d935e40\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.833154 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-ovsdbserver-sb\") pod \"d6f147c9-83bc-40ae-aabd-f6879d935e40\" (UID: \"d6f147c9-83bc-40ae-aabd-f6879d935e40\") " Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.849552 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6f147c9-83bc-40ae-aabd-f6879d935e40-kube-api-access-fglbm" (OuterVolumeSpecName: "kube-api-access-fglbm") pod "d6f147c9-83bc-40ae-aabd-f6879d935e40" (UID: "d6f147c9-83bc-40ae-aabd-f6879d935e40"). InnerVolumeSpecName "kube-api-access-fglbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.858838 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h8dgc"] Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.870054 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-config" (OuterVolumeSpecName: "config") pod "d6f147c9-83bc-40ae-aabd-f6879d935e40" (UID: "d6f147c9-83bc-40ae-aabd-f6879d935e40"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.871316 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6f147c9-83bc-40ae-aabd-f6879d935e40" (UID: "d6f147c9-83bc-40ae-aabd-f6879d935e40"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.881076 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d6f147c9-83bc-40ae-aabd-f6879d935e40" (UID: "d6f147c9-83bc-40ae-aabd-f6879d935e40"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.881641 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6f147c9-83bc-40ae-aabd-f6879d935e40" (UID: "d6f147c9-83bc-40ae-aabd-f6879d935e40"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.915486 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h8dgc" event={"ID":"aabd22c8-8148-45fc-8d7f-af29844bc4f7","Type":"ContainerStarted","Data":"ce04d6c5c10d0810356c611fab2e6b9c3fe1701753ac451f410b3c59124766cc"} Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.917537 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gnwcw" event={"ID":"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a","Type":"ContainerStarted","Data":"f792e563eede022e098794aebad84314cdba8dec72df04d548c4bae8f94c6ba4"} Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.921607 4953 generic.go:334] "Generic (PLEG): container finished" podID="183cc7ed-0993-42e6-9219-bc93a27cdedd" containerID="7b7f05d44031b022d8935020e159d9c53faaf837774802647b2449f37bfae0ac" exitCode=0 Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.921673 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" event={"ID":"183cc7ed-0993-42e6-9219-bc93a27cdedd","Type":"ContainerDied","Data":"7b7f05d44031b022d8935020e159d9c53faaf837774802647b2449f37bfae0ac"} Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.921701 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" event={"ID":"183cc7ed-0993-42e6-9219-bc93a27cdedd","Type":"ContainerStarted","Data":"9afb5b07ac89d320ebcfe6ae8124b58c7e4158b8817b54176cb8446179fde7b2"} Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.926678 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6f147c9-83bc-40ae-aabd-f6879d935e40" (UID: "d6f147c9-83bc-40ae-aabd-f6879d935e40"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.926760 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" event={"ID":"d6f147c9-83bc-40ae-aabd-f6879d935e40","Type":"ContainerDied","Data":"04ca47272b0639c505d18e8bd04983d21c2d11332164d2705a70e47ffd4912c7"} Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.926806 4953 scope.go:117] "RemoveContainer" containerID="3aa53c9c19d93284c07c2540ca4b29164a57f91f0062913a1ba6fc4b3255c628" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.926817 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-9dr7l" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.936106 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.936141 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.936168 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.936181 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.936194 4953 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f147c9-83bc-40ae-aabd-f6879d935e40-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:35.936205 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fglbm\" (UniqueName: \"kubernetes.io/projected/d6f147c9-83bc-40ae-aabd-f6879d935e40-kube-api-access-fglbm\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:36.007715 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-9dr7l"] Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:36.026508 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-9dr7l"] Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:36.496046 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6f147c9-83bc-40ae-aabd-f6879d935e40" path="/var/lib/kubelet/pods/d6f147c9-83bc-40ae-aabd-f6879d935e40/volumes" Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:36.924147 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fd458c8cc-bk9wq"] Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:36.932351 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:36.975555 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fzjwm"] Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:36.990286 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" 
event={"ID":"6eaf3f1e-6dc5-4283-9fce-0955a2f18821","Type":"ContainerStarted","Data":"1d53ea0dee0bae03e762f92b51484520e356039aee01693272f8971adde2fb36"} Dec 11 10:33:36 crc kubenswrapper[4953]: I1211 10:33:36.995725 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kv78b"] Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.017677 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h8dgc" event={"ID":"aabd22c8-8148-45fc-8d7f-af29844bc4f7","Type":"ContainerStarted","Data":"034d6a32090ba212ff6b84ead3f44683fd598c601b40628759991ada8819812b"} Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.055439 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gnwcw" event={"ID":"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a","Type":"ContainerStarted","Data":"5de87eeb054b473acfa2ae00d395cdca8c1df68037366bf5b76babf98bca8bea"} Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.064836 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vj92k"] Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.067108 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.085122 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-h8dgc" podStartSLOduration=3.08506801 podStartE2EDuration="3.08506801s" podCreationTimestamp="2025-12-11 10:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:33:37.053290961 +0000 UTC m=+1335.077150004" watchObservedRunningTime="2025-12-11 10:33:37.08506801 +0000 UTC m=+1335.108927043" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.110964 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-gnwcw" podStartSLOduration=3.110943968 podStartE2EDuration="3.110943968s" podCreationTimestamp="2025-12-11 10:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:33:37.09131899 +0000 UTC m=+1335.115178013" watchObservedRunningTime="2025-12-11 10:33:37.110943968 +0000 UTC m=+1335.134803001" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.193550 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-ovsdbserver-nb\") pod \"183cc7ed-0993-42e6-9219-bc93a27cdedd\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.193666 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dfck\" (UniqueName: \"kubernetes.io/projected/183cc7ed-0993-42e6-9219-bc93a27cdedd-kube-api-access-9dfck\") pod \"183cc7ed-0993-42e6-9219-bc93a27cdedd\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.193720 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-dns-swift-storage-0\") pod \"183cc7ed-0993-42e6-9219-bc93a27cdedd\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.193782 
4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-config\") pod \"183cc7ed-0993-42e6-9219-bc93a27cdedd\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.193795 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.193857 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-ovsdbserver-sb\") pod \"183cc7ed-0993-42e6-9219-bc93a27cdedd\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.194030 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-dns-svc\") pod \"183cc7ed-0993-42e6-9219-bc93a27cdedd\" (UID: \"183cc7ed-0993-42e6-9219-bc93a27cdedd\") " Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.231873 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183cc7ed-0993-42e6-9219-bc93a27cdedd-kube-api-access-9dfck" (OuterVolumeSpecName: "kube-api-access-9dfck") pod "183cc7ed-0993-42e6-9219-bc93a27cdedd" (UID: "183cc7ed-0993-42e6-9219-bc93a27cdedd"). InnerVolumeSpecName "kube-api-access-9dfck". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.257373 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "183cc7ed-0993-42e6-9219-bc93a27cdedd" (UID: "183cc7ed-0993-42e6-9219-bc93a27cdedd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.259171 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "183cc7ed-0993-42e6-9219-bc93a27cdedd" (UID: "183cc7ed-0993-42e6-9219-bc93a27cdedd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.279841 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-config" (OuterVolumeSpecName: "config") pod "183cc7ed-0993-42e6-9219-bc93a27cdedd" (UID: "183cc7ed-0993-42e6-9219-bc93a27cdedd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.295280 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "183cc7ed-0993-42e6-9219-bc93a27cdedd" (UID: "183cc7ed-0993-42e6-9219-bc93a27cdedd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.297122 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.297155 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dfck\" (UniqueName: \"kubernetes.io/projected/183cc7ed-0993-42e6-9219-bc93a27cdedd-kube-api-access-9dfck\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.297176 4953 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.297191 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.297203 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.339556 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "183cc7ed-0993-42e6-9219-bc93a27cdedd" (UID: "183cc7ed-0993-42e6-9219-bc93a27cdedd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:37 crc kubenswrapper[4953]: I1211 10:33:37.398494 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/183cc7ed-0993-42e6-9219-bc93a27cdedd-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:37 crc kubenswrapper[4953]: E1211 10:33:37.829558 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97b9ff8e_f944_48ee_803a_d6873a9db805.slice/crio-conmon-79cc47d9dc3c03e712eaad55e52c68d02d784451419037cdd7fbdbf61ac6149e.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.085173 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kv78b" event={"ID":"24b23531-3ad1-4b46-88f6-e930d79b6556","Type":"ContainerStarted","Data":"d05ea9371e00f9a76783f9ecab654baa090e90f55b4c9d6bb28a1cdf5b7a8447"} Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.092234 4953 generic.go:334] "Generic (PLEG): container finished" podID="6eaf3f1e-6dc5-4283-9fce-0955a2f18821" containerID="cb3388d4f6392a8a1aa9d14419f6a8c47b8b33077fa0d041d34378312b05ba2f" exitCode=0 Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.092298 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" event={"ID":"6eaf3f1e-6dc5-4283-9fce-0955a2f18821","Type":"ContainerDied","Data":"cb3388d4f6392a8a1aa9d14419f6a8c47b8b33077fa0d041d34378312b05ba2f"} Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.098457 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vzw7v" 
event={"ID":"f099a9d1-d895-4fdc-84cc-28df6fb24db0","Type":"ContainerStarted","Data":"176a6175d3f004b18252df1170910cef32683d1646cc584d7160b5f2877d0bf5"} Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.101846 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fzjwm" event={"ID":"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a","Type":"ContainerStarted","Data":"d27bba439c4b635543631228acc1c2063a31e300590d4a5bdfc9e830e563ac53"} Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.104585 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" event={"ID":"183cc7ed-0993-42e6-9219-bc93a27cdedd","Type":"ContainerDied","Data":"9afb5b07ac89d320ebcfe6ae8124b58c7e4158b8817b54176cb8446179fde7b2"} Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.104630 4953 scope.go:117] "RemoveContainer" containerID="7b7f05d44031b022d8935020e159d9c53faaf837774802647b2449f37bfae0ac" Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.104771 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58647bbf65-5xnbq" Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.113952 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d8b6f34-cdf2-4285-ba3e-3a14621430e5","Type":"ContainerStarted","Data":"8c58aebaa8fb1834f5a0d0ccb48322178bcefbe5e68940f2e37715801d0ffcfb"} Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.127324 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vj92k" event={"ID":"c46c2893-6218-455e-a4ee-cf1b4cda45b7","Type":"ContainerStarted","Data":"56029ec7e2a42a192f9680e4e3663d1f9e8c1f15c36559368f015fe019823fc3"} Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.164747 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-vzw7v" podStartSLOduration=3.380170477 podStartE2EDuration="35.164728542s" podCreationTimestamp="2025-12-11 10:33:03 +0000 UTC" firstStartedPulling="2025-12-11 10:33:04.535874852 +0000 UTC m=+1302.559733885" lastFinishedPulling="2025-12-11 10:33:36.320432917 +0000 UTC m=+1334.344291950" observedRunningTime="2025-12-11 10:33:38.136091469 +0000 UTC m=+1336.159950502" watchObservedRunningTime="2025-12-11 10:33:38.164728542 +0000 UTC m=+1336.188587575" Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.453557 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58647bbf65-5xnbq"] Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.465825 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58647bbf65-5xnbq"] Dec 11 10:33:38 crc kubenswrapper[4953]: I1211 10:33:38.488948 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="183cc7ed-0993-42e6-9219-bc93a27cdedd" path="/var/lib/kubelet/pods/183cc7ed-0993-42e6-9219-bc93a27cdedd/volumes" Dec 11 10:33:39 crc kubenswrapper[4953]: I1211 10:33:39.205238 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" event={"ID":"6eaf3f1e-6dc5-4283-9fce-0955a2f18821","Type":"ContainerStarted","Data":"d3bb9df50f5cdaf964d561c82928b11bff5098e78bf6642e181807a69bac3a00"} Dec 11 10:33:39 crc kubenswrapper[4953]: I1211 10:33:39.205597 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:33:39 crc kubenswrapper[4953]: I1211 10:33:39.233797 4953 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" podStartSLOduration=5.233772902 podStartE2EDuration="5.233772902s" podCreationTimestamp="2025-12-11 10:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:33:39.224535521 +0000 UTC m=+1337.248394554" watchObservedRunningTime="2025-12-11 10:33:39.233772902 +0000 UTC m=+1337.257631935" Dec 11 10:33:42 crc kubenswrapper[4953]: I1211 10:33:42.249510 4953 generic.go:334] "Generic (PLEG): container finished" podID="aabd22c8-8148-45fc-8d7f-af29844bc4f7" containerID="034d6a32090ba212ff6b84ead3f44683fd598c601b40628759991ada8819812b" exitCode=0 Dec 11 10:33:42 crc kubenswrapper[4953]: I1211 10:33:42.249546 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h8dgc" event={"ID":"aabd22c8-8148-45fc-8d7f-af29844bc4f7","Type":"ContainerDied","Data":"034d6a32090ba212ff6b84ead3f44683fd598c601b40628759991ada8819812b"} Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.713307 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h8dgc" Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.838523 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-credential-keys\") pod \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.838957 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgqhz\" (UniqueName: \"kubernetes.io/projected/aabd22c8-8148-45fc-8d7f-af29844bc4f7-kube-api-access-bgqhz\") pod \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.839013 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-scripts\") pod \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.839071 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-config-data\") pod \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.839198 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-fernet-keys\") pod \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.839244 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-combined-ca-bundle\") pod \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\" (UID: \"aabd22c8-8148-45fc-8d7f-af29844bc4f7\") " Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.845298 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "aabd22c8-8148-45fc-8d7f-af29844bc4f7" (UID: "aabd22c8-8148-45fc-8d7f-af29844bc4f7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.847837 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-scripts" (OuterVolumeSpecName: "scripts") pod "aabd22c8-8148-45fc-8d7f-af29844bc4f7" (UID: "aabd22c8-8148-45fc-8d7f-af29844bc4f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.854340 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabd22c8-8148-45fc-8d7f-af29844bc4f7-kube-api-access-bgqhz" (OuterVolumeSpecName: "kube-api-access-bgqhz") pod "aabd22c8-8148-45fc-8d7f-af29844bc4f7" (UID: "aabd22c8-8148-45fc-8d7f-af29844bc4f7"). InnerVolumeSpecName "kube-api-access-bgqhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.855813 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "aabd22c8-8148-45fc-8d7f-af29844bc4f7" (UID: "aabd22c8-8148-45fc-8d7f-af29844bc4f7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.870870 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aabd22c8-8148-45fc-8d7f-af29844bc4f7" (UID: "aabd22c8-8148-45fc-8d7f-af29844bc4f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.873947 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-config-data" (OuterVolumeSpecName: "config-data") pod "aabd22c8-8148-45fc-8d7f-af29844bc4f7" (UID: "aabd22c8-8148-45fc-8d7f-af29844bc4f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.941690 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.941765 4953 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.941776 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgqhz\" (UniqueName: \"kubernetes.io/projected/aabd22c8-8148-45fc-8d7f-af29844bc4f7-kube-api-access-bgqhz\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.941788 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.941797 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:44 crc kubenswrapper[4953]: I1211 10:33:44.941805 4953 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aabd22c8-8148-45fc-8d7f-af29844bc4f7-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.069116 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.138164 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-9t76k"] Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.138491 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" podUID="2c1cb581-9d65-4d31-857c-de67900c05bf" containerName="dnsmasq-dns" containerID="cri-o://185b3b07d33dd1056fb432b491574d0038ab5e253a6ff737dfbfcf3db6f243a8" gracePeriod=10 Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.203817 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" podUID="2c1cb581-9d65-4d31-857c-de67900c05bf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.279747 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h8dgc" event={"ID":"aabd22c8-8148-45fc-8d7f-af29844bc4f7","Type":"ContainerDied","Data":"ce04d6c5c10d0810356c611fab2e6b9c3fe1701753ac451f410b3c59124766cc"} Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.279801 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce04d6c5c10d0810356c611fab2e6b9c3fe1701753ac451f410b3c59124766cc" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.279860 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h8dgc" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.282685 4953 generic.go:334] "Generic (PLEG): container finished" podID="2c1cb581-9d65-4d31-857c-de67900c05bf" containerID="185b3b07d33dd1056fb432b491574d0038ab5e253a6ff737dfbfcf3db6f243a8" exitCode=0 Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.282731 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" event={"ID":"2c1cb581-9d65-4d31-857c-de67900c05bf","Type":"ContainerDied","Data":"185b3b07d33dd1056fb432b491574d0038ab5e253a6ff737dfbfcf3db6f243a8"} Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.799073 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-h8dgc"] Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.805831 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-h8dgc"] Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.901923 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-n8dxg"] Dec 11 10:33:45 crc kubenswrapper[4953]: E1211 10:33:45.902734 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f147c9-83bc-40ae-aabd-f6879d935e40" containerName="init" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.902759 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f147c9-83bc-40ae-aabd-f6879d935e40" containerName="init" Dec 11 10:33:45 crc kubenswrapper[4953]: E1211 10:33:45.902779 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabd22c8-8148-45fc-8d7f-af29844bc4f7" containerName="keystone-bootstrap" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.902790 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabd22c8-8148-45fc-8d7f-af29844bc4f7" containerName="keystone-bootstrap" Dec 11 10:33:45 crc kubenswrapper[4953]: E1211 10:33:45.902805 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183cc7ed-0993-42e6-9219-bc93a27cdedd" containerName="init" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.902813 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="183cc7ed-0993-42e6-9219-bc93a27cdedd" containerName="init" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.903025 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f147c9-83bc-40ae-aabd-f6879d935e40" containerName="init" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.903075 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="183cc7ed-0993-42e6-9219-bc93a27cdedd" containerName="init" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.903101 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabd22c8-8148-45fc-8d7f-af29844bc4f7" containerName="keystone-bootstrap" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.904004 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.905999 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.911004 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.911062 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v2jcr" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.911019 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.911260 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.917757 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n8dxg"] Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.974891 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-fernet-keys\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.974956 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-combined-ca-bundle\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.975323 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v82r\" (UniqueName: \"kubernetes.io/projected/3079f9fc-3d3e-4647-a889-fae4277437fc-kube-api-access-7v82r\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.975399 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-config-data\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.975484 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-credential-keys\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:45 crc kubenswrapper[4953]: I1211 10:33:45.975510 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-scripts\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.077011 4953 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7v82r\" (UniqueName: \"kubernetes.io/projected/3079f9fc-3d3e-4647-a889-fae4277437fc-kube-api-access-7v82r\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.077065 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-config-data\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.077116 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-credential-keys\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.077139 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-scripts\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.077171 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-fernet-keys\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.077227 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-combined-ca-bundle\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.093392 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-scripts\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.093616 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-combined-ca-bundle\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.098102 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-credential-keys\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.099273 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-fernet-keys\") pod \"keystone-bootstrap-n8dxg\" (UID: 
\"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.102970 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v82r\" (UniqueName: \"kubernetes.io/projected/3079f9fc-3d3e-4647-a889-fae4277437fc-kube-api-access-7v82r\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.104290 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-config-data\") pod \"keystone-bootstrap-n8dxg\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") " pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.223740 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n8dxg" Dec 11 10:33:46 crc kubenswrapper[4953]: I1211 10:33:46.482894 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabd22c8-8148-45fc-8d7f-af29844bc4f7" path="/var/lib/kubelet/pods/aabd22c8-8148-45fc-8d7f-af29844bc4f7/volumes" Dec 11 10:33:48 crc kubenswrapper[4953]: E1211 10:33:48.076319 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97b9ff8e_f944_48ee_803a_d6873a9db805.slice/crio-conmon-79cc47d9dc3c03e712eaad55e52c68d02d784451419037cdd7fbdbf61ac6149e.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:33:48 crc kubenswrapper[4953]: I1211 10:33:48.193750 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:33:48 crc kubenswrapper[4953]: I1211 10:33:48.193810 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:33:50 crc kubenswrapper[4953]: I1211 10:33:50.325738 4953 generic.go:334] "Generic (PLEG): container finished" podID="f099a9d1-d895-4fdc-84cc-28df6fb24db0" containerID="176a6175d3f004b18252df1170910cef32683d1646cc584d7160b5f2877d0bf5" exitCode=0 Dec 11 10:33:50 crc kubenswrapper[4953]: I1211 10:33:50.325824 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vzw7v" event={"ID":"f099a9d1-d895-4fdc-84cc-28df6fb24db0","Type":"ContainerDied","Data":"176a6175d3f004b18252df1170910cef32683d1646cc584d7160b5f2877d0bf5"} Dec 11 10:33:55 crc kubenswrapper[4953]: I1211 10:33:55.203261 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" podUID="2c1cb581-9d65-4d31-857c-de67900c05bf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Dec 11 10:33:55 crc kubenswrapper[4953]: E1211 10:33:55.235108 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Dec 11 10:33:55 crc kubenswrapper[4953]: E1211 10:33:55.235358 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9d7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-vj92k_openstack(c46c2893-6218-455e-a4ee-cf1b4cda45b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:33:55 crc kubenswrapper[4953]: E1211 10:33:55.236608 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-vj92k" podUID="c46c2893-6218-455e-a4ee-cf1b4cda45b7" Dec 11 10:33:55 crc kubenswrapper[4953]: E1211 10:33:55.375895 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-vj92k" podUID="c46c2893-6218-455e-a4ee-cf1b4cda45b7" Dec 11 10:33:58 crc kubenswrapper[4953]: E1211 10:33:58.285910 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97b9ff8e_f944_48ee_803a_d6873a9db805.slice/crio-conmon-79cc47d9dc3c03e712eaad55e52c68d02d784451419037cdd7fbdbf61ac6149e.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:33:58 crc kubenswrapper[4953]: E1211 10:33:58.858287 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Dec 11 10:33:58 crc kubenswrapper[4953]: E1211 10:33:58.858695 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdjkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-fzjwm_openstack(ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:33:58 crc kubenswrapper[4953]: E1211 10:33:58.860508 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-fzjwm" podUID="ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.122890 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.162207 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.225516 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcrts\" (UniqueName: \"kubernetes.io/projected/f099a9d1-d895-4fdc-84cc-28df6fb24db0-kube-api-access-xcrts\") pod \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.225752 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-ovsdbserver-nb\") pod \"2c1cb581-9d65-4d31-857c-de67900c05bf\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.225886 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-dns-svc\") pod \"2c1cb581-9d65-4d31-857c-de67900c05bf\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.226012 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-combined-ca-bundle\") pod \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.226076 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg52l\" (UniqueName: \"kubernetes.io/projected/2c1cb581-9d65-4d31-857c-de67900c05bf-kube-api-access-dg52l\") pod \"2c1cb581-9d65-4d31-857c-de67900c05bf\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.226164 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-config\") pod \"2c1cb581-9d65-4d31-857c-de67900c05bf\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.226291 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-ovsdbserver-sb\") pod \"2c1cb581-9d65-4d31-857c-de67900c05bf\" (UID: \"2c1cb581-9d65-4d31-857c-de67900c05bf\") " Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.226368 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-db-sync-config-data\") pod \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.226461 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-config-data\") pod \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\" (UID: \"f099a9d1-d895-4fdc-84cc-28df6fb24db0\") " Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.238281 4953 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f099a9d1-d895-4fdc-84cc-28df6fb24db0" (UID: "f099a9d1-d895-4fdc-84cc-28df6fb24db0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.239257 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1cb581-9d65-4d31-857c-de67900c05bf-kube-api-access-dg52l" (OuterVolumeSpecName: "kube-api-access-dg52l") pod "2c1cb581-9d65-4d31-857c-de67900c05bf" (UID: "2c1cb581-9d65-4d31-857c-de67900c05bf"). InnerVolumeSpecName "kube-api-access-dg52l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.239680 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f099a9d1-d895-4fdc-84cc-28df6fb24db0-kube-api-access-xcrts" (OuterVolumeSpecName: "kube-api-access-xcrts") pod "f099a9d1-d895-4fdc-84cc-28df6fb24db0" (UID: "f099a9d1-d895-4fdc-84cc-28df6fb24db0"). InnerVolumeSpecName "kube-api-access-xcrts". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.290996 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f099a9d1-d895-4fdc-84cc-28df6fb24db0" (UID: "f099a9d1-d895-4fdc-84cc-28df6fb24db0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.313901 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n8dxg"] Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.315239 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c1cb581-9d65-4d31-857c-de67900c05bf" (UID: "2c1cb581-9d65-4d31-857c-de67900c05bf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.328739 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcrts\" (UniqueName: \"kubernetes.io/projected/f099a9d1-d895-4fdc-84cc-28df6fb24db0-kube-api-access-xcrts\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.328782 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.328795 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.328808 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg52l\" (UniqueName: \"kubernetes.io/projected/2c1cb581-9d65-4d31-857c-de67900c05bf-kube-api-access-dg52l\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.328821 4953 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.333191 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-config" (OuterVolumeSpecName: "config") pod "2c1cb581-9d65-4d31-857c-de67900c05bf" (UID: "2c1cb581-9d65-4d31-857c-de67900c05bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.342510 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-config-data" (OuterVolumeSpecName: "config-data") pod "f099a9d1-d895-4fdc-84cc-28df6fb24db0" (UID: "f099a9d1-d895-4fdc-84cc-28df6fb24db0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.359203 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c1cb581-9d65-4d31-857c-de67900c05bf" (UID: "2c1cb581-9d65-4d31-857c-de67900c05bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.364773 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c1cb581-9d65-4d31-857c-de67900c05bf" (UID: "2c1cb581-9d65-4d31-857c-de67900c05bf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.427212 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" event={"ID":"2c1cb581-9d65-4d31-857c-de67900c05bf","Type":"ContainerDied","Data":"b9fd3aba719059ec59d37f0c01deef4872a40b2b70e03a344d5f97701fbc31f1"} Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.427293 4953 scope.go:117] "RemoveContainer" containerID="185b3b07d33dd1056fb432b491574d0038ab5e253a6ff737dfbfcf3db6f243a8" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.427482 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.434939 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.435031 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.435053 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c1cb581-9d65-4d31-857c-de67900c05bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.435063 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f099a9d1-d895-4fdc-84cc-28df6fb24db0-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.436019 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kv78b" event={"ID":"24b23531-3ad1-4b46-88f6-e930d79b6556","Type":"ContainerStarted","Data":"7ed21a9491280858f26ed7624cc4024f4c01f69b9e05e955a2aacf00f9ef4ee2"} Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.445521 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n8dxg" event={"ID":"3079f9fc-3d3e-4647-a889-fae4277437fc","Type":"ContainerStarted","Data":"44e177a5c1d7e6f8e71f7ef0d9ee1a2caf526ec0cf4e74e43c7a25c9084f32dc"} Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.447808 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vzw7v" event={"ID":"f099a9d1-d895-4fdc-84cc-28df6fb24db0","Type":"ContainerDied","Data":"d12f8c3eb4f6a86e0bb66dfb59a68462dca1bce85c82ba932d6cca1c44c8a5b7"} Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.447843 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d12f8c3eb4f6a86e0bb66dfb59a68462dca1bce85c82ba932d6cca1c44c8a5b7" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.447927 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vzw7v" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.465346 4953 scope.go:117] "RemoveContainer" containerID="da84b1e37c527f96dd5fde42e9efc32cf460d1be64b21ab30e46f0d9ae377d84" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.469516 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d8b6f34-cdf2-4285-ba3e-3a14621430e5","Type":"ContainerStarted","Data":"b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4"} Dec 11 10:33:59 crc kubenswrapper[4953]: E1211 10:33:59.470268 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-fzjwm" podUID="ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.485934 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-kv78b" podStartSLOduration=3.747767824 podStartE2EDuration="25.485916503s" podCreationTimestamp="2025-12-11 10:33:34 +0000 UTC" firstStartedPulling="2025-12-11 10:33:37.052445645 +0000 UTC m=+1335.076304668" lastFinishedPulling="2025-12-11 10:33:58.790594314 +0000 UTC m=+1356.814453347" observedRunningTime="2025-12-11 10:33:59.461775358 +0000 UTC m=+1357.485634391" watchObservedRunningTime="2025-12-11 10:33:59.485916503 +0000 UTC m=+1357.509775536" Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.496223 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-9t76k"] Dec 11 10:33:59 crc kubenswrapper[4953]: I1211 10:33:59.510694 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-9t76k"] Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.213721 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-9t76k" podUID="2c1cb581-9d65-4d31-857c-de67900c05bf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.489904 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c1cb581-9d65-4d31-857c-de67900c05bf" path="/var/lib/kubelet/pods/2c1cb581-9d65-4d31-857c-de67900c05bf/volumes" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.490588 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n8dxg" event={"ID":"3079f9fc-3d3e-4647-a889-fae4277437fc","Type":"ContainerStarted","Data":"b01cdea6f093563c7ea84139090a75d43cc46e5f226de5ff7edc4ba180c43ab7"} Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.537589 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-n8dxg" podStartSLOduration=15.537555993 podStartE2EDuration="15.537555993s" podCreationTimestamp="2025-12-11 10:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:00.511819048 +0000 UTC m=+1358.535678091" watchObservedRunningTime="2025-12-11 10:34:00.537555993 +0000 UTC m=+1358.561415026" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.540396 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-s9gvv"] Dec 11 10:34:00 crc 
kubenswrapper[4953]: E1211 10:34:00.540727 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1cb581-9d65-4d31-857c-de67900c05bf" containerName="dnsmasq-dns" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.540743 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1cb581-9d65-4d31-857c-de67900c05bf" containerName="dnsmasq-dns" Dec 11 10:34:00 crc kubenswrapper[4953]: E1211 10:34:00.540772 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1cb581-9d65-4d31-857c-de67900c05bf" containerName="init" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.540779 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1cb581-9d65-4d31-857c-de67900c05bf" containerName="init" Dec 11 10:34:00 crc kubenswrapper[4953]: E1211 10:34:00.540803 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f099a9d1-d895-4fdc-84cc-28df6fb24db0" containerName="glance-db-sync" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.540809 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f099a9d1-d895-4fdc-84cc-28df6fb24db0" containerName="glance-db-sync" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.540969 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f099a9d1-d895-4fdc-84cc-28df6fb24db0" containerName="glance-db-sync" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.540982 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c1cb581-9d65-4d31-857c-de67900c05bf" containerName="dnsmasq-dns" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.541817 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.578180 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-s9gvv"] Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.672117 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-config\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.672176 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.672238 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpwhn\" (UniqueName: \"kubernetes.io/projected/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-kube-api-access-rpwhn\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.672267 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc 
kubenswrapper[4953]: I1211 10:34:00.672357 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.672386 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.773898 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.773958 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-config\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.773985 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.774025 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpwhn\" (UniqueName: \"kubernetes.io/projected/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-kube-api-access-rpwhn\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.774053 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.774133 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.775531 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.775614 4953 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.775629 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.776284 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-config\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.776392 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.795450 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpwhn\" (UniqueName: \"kubernetes.io/projected/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-kube-api-access-rpwhn\") pod \"dnsmasq-dns-5dc4fcdbc-s9gvv\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:00 crc kubenswrapper[4953]: I1211 10:34:00.871941 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.434971 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-s9gvv"] Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.494179 4953 generic.go:334] "Generic (PLEG): container finished" podID="5fe16c5e-1161-4e0d-83d4-9f07a2643a6a" containerID="5de87eeb054b473acfa2ae00d395cdca8c1df68037366bf5b76babf98bca8bea" exitCode=0 Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.494316 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gnwcw" event={"ID":"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a","Type":"ContainerDied","Data":"5de87eeb054b473acfa2ae00d395cdca8c1df68037366bf5b76babf98bca8bea"} Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.558517 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.561050 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.566459 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.566919 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bpvgf" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.566920 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.572710 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.731974 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.732434 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.732557 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-config-data\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.732681 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-scripts\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.732791 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.732946 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb7nt\" (UniqueName: \"kubernetes.io/projected/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-kube-api-access-hb7nt\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.733047 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-logs\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " 
pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.764158 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.765898 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.768397 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.809626 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.834927 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.835009 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.835058 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-config-data\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.835082 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-scripts\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.835112 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.835146 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb7nt\" (UniqueName: \"kubernetes.io/projected/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-kube-api-access-hb7nt\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.835167 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-logs\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.835384 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.835774 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.835789 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-logs\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.841137 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-config-data\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.841300 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-scripts\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.841872 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.879380 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb7nt\" (UniqueName: \"kubernetes.io/projected/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-kube-api-access-hb7nt\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.883453 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.896796 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.936960 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.937034 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e05f528-a644-4eb2-878d-65ca7558e66b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.937175 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.937291 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.937342 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxs6l\" (UniqueName: \"kubernetes.io/projected/1e05f528-a644-4eb2-878d-65ca7558e66b-kube-api-access-dxs6l\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.937366 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:01 crc kubenswrapper[4953]: I1211 10:34:01.937462 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e05f528-a644-4eb2-878d-65ca7558e66b-logs\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.039427 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.040053 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxs6l\" (UniqueName: \"kubernetes.io/projected/1e05f528-a644-4eb2-878d-65ca7558e66b-kube-api-access-dxs6l\") pod 
\"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.040067 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.040085 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.041620 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e05f528-a644-4eb2-878d-65ca7558e66b-logs\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.042525 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.042854 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e05f528-a644-4eb2-878d-65ca7558e66b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.044897 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e05f528-a644-4eb2-878d-65ca7558e66b-logs\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.051068 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.059990 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e05f528-a644-4eb2-878d-65ca7558e66b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.060148 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0" Dec 
11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.060849 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxs6l\" (UniqueName: \"kubernetes.io/projected/1e05f528-a644-4eb2-878d-65ca7558e66b-kube-api-access-dxs6l\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0"
Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.064083 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0"
Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.095880 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0"
Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.119922 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " pod="openstack/glance-default-internal-api-0"
Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.401780 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.519035 4953 generic.go:334] "Generic (PLEG): container finished" podID="24b23531-3ad1-4b46-88f6-e930d79b6556" containerID="7ed21a9491280858f26ed7624cc4024f4c01f69b9e05e955a2aacf00f9ef4ee2" exitCode=0
Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.519415 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kv78b" event={"ID":"24b23531-3ad1-4b46-88f6-e930d79b6556","Type":"ContainerDied","Data":"7ed21a9491280858f26ed7624cc4024f4c01f69b9e05e955a2aacf00f9ef4ee2"}
Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.527552 4953 generic.go:334] "Generic (PLEG): container finished" podID="6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" containerID="d3a2f32fa13b9b070accf6e1045a8eb80ad869e00a70f3ba2130840107e102ac" exitCode=0
Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.527640 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" event={"ID":"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf","Type":"ContainerDied","Data":"d3a2f32fa13b9b070accf6e1045a8eb80ad869e00a70f3ba2130840107e102ac"}
Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.527689 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" event={"ID":"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf","Type":"ContainerStarted","Data":"e3992bade22e56713ef83740f5d162cb0dc36001ba79303fdb04b8d12f67f889"}
Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.532548 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d8b6f34-cdf2-4285-ba3e-3a14621430e5","Type":"ContainerStarted","Data":"cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882"}
Dec 11 10:34:02 crc kubenswrapper[4953]: I1211 10:34:02.593689 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 11 10:34:02 crc kubenswrapper[4953]: W1211 10:34:02.604822 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7734dd1c_9884_47a8_86fb_7f7dbf0c1af3.slice/crio-994c5967d28e5e38feb9a29a80b2012a41602362de777cc160827bfecf4f649e WatchSource:0}: Error finding container 994c5967d28e5e38feb9a29a80b2012a41602362de777cc160827bfecf4f649e: Status 404 returned error can't find the container with id 994c5967d28e5e38feb9a29a80b2012a41602362de777cc160827bfecf4f649e
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.082610 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.105209 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gnwcw"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.212047 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-config\") pod \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\" (UID: \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\") "
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.212214 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-combined-ca-bundle\") pod \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\" (UID: \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\") "
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.212287 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbr2k\" (UniqueName: \"kubernetes.io/projected/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-kube-api-access-cbr2k\") pod \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\" (UID: \"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a\") "
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.220730 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-kube-api-access-cbr2k" (OuterVolumeSpecName: "kube-api-access-cbr2k") pod "5fe16c5e-1161-4e0d-83d4-9f07a2643a6a" (UID: "5fe16c5e-1161-4e0d-83d4-9f07a2643a6a"). InnerVolumeSpecName "kube-api-access-cbr2k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.246100 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fe16c5e-1161-4e0d-83d4-9f07a2643a6a" (UID: "5fe16c5e-1161-4e0d-83d4-9f07a2643a6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.259543 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-config" (OuterVolumeSpecName: "config") pod "5fe16c5e-1161-4e0d-83d4-9f07a2643a6a" (UID: "5fe16c5e-1161-4e0d-83d4-9f07a2643a6a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.314948 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.314981 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbr2k\" (UniqueName: \"kubernetes.io/projected/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-kube-api-access-cbr2k\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.314991 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a-config\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.571125 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" event={"ID":"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf","Type":"ContainerStarted","Data":"dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940"}
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.577121 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3","Type":"ContainerStarted","Data":"8b938ad3bfb7e53a045b18b3f364af08cdb2463b187560c5f2a84f9baa4e9479"}
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.577168 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3","Type":"ContainerStarted","Data":"994c5967d28e5e38feb9a29a80b2012a41602362de777cc160827bfecf4f649e"}
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.581045 4953 generic.go:334] "Generic (PLEG): container finished" podID="3079f9fc-3d3e-4647-a889-fae4277437fc" containerID="b01cdea6f093563c7ea84139090a75d43cc46e5f226de5ff7edc4ba180c43ab7" exitCode=0
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.581143 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n8dxg" event={"ID":"3079f9fc-3d3e-4647-a889-fae4277437fc","Type":"ContainerDied","Data":"b01cdea6f093563c7ea84139090a75d43cc46e5f226de5ff7edc4ba180c43ab7"}
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.584434 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gnwcw" event={"ID":"5fe16c5e-1161-4e0d-83d4-9f07a2643a6a","Type":"ContainerDied","Data":"f792e563eede022e098794aebad84314cdba8dec72df04d548c4bae8f94c6ba4"}
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.584461 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f792e563eede022e098794aebad84314cdba8dec72df04d548c4bae8f94c6ba4"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.585130 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gnwcw"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.588002 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1e05f528-a644-4eb2-878d-65ca7558e66b","Type":"ContainerStarted","Data":"91068aec120efedc1df48c316743420f9d3f674d603325497c81bdae43eb78a0"}
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.603381 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" podStartSLOduration=3.6033586189999998 podStartE2EDuration="3.603358619s" podCreationTimestamp="2025-12-11 10:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:03.597313469 +0000 UTC m=+1361.621172512" watchObservedRunningTime="2025-12-11 10:34:03.603358619 +0000 UTC m=+1361.627217652"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.769276 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-s9gvv"]
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.822421 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-h4qws"]
Dec 11 10:34:03 crc kubenswrapper[4953]: E1211 10:34:03.822975 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe16c5e-1161-4e0d-83d4-9f07a2643a6a" containerName="neutron-db-sync"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.822994 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe16c5e-1161-4e0d-83d4-9f07a2643a6a" containerName="neutron-db-sync"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.823236 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe16c5e-1161-4e0d-83d4-9f07a2643a6a" containerName="neutron-db-sync"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.824509 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.892011 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-h4qws"]
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.939462 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75bd4868-pp5tq"]
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.940928 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.942619 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.943009 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-config\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.943641 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtwhv\" (UniqueName: \"kubernetes.io/projected/41289b74-d869-42e1-875c-4ba58c7cd4c2-kube-api-access-mtwhv\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.943700 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.943745 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.943819 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.948369 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vqh69"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.948485 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.948555 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Dec 11 10:34:03 crc kubenswrapper[4953]: I1211 10:34:03.948720 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.009645 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75bd4868-pp5tq"]
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.045379 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.045552 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-config\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.045628 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-ovndb-tls-certs\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.045706 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-combined-ca-bundle\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.045788 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtwhv\" (UniqueName: \"kubernetes.io/projected/41289b74-d869-42e1-875c-4ba58c7cd4c2-kube-api-access-mtwhv\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.045847 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.045881 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.046534 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.047541 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-config\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.048389 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-httpd-config\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.049557 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.049748 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.049841 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ls5d\" (UniqueName: \"kubernetes.io/projected/e06bccd4-aefb-4055-b4eb-ef745234cbcb-kube-api-access-9ls5d\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.049997 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-config\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.050760 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.051703 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.080306 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtwhv\" (UniqueName: \"kubernetes.io/projected/41289b74-d869-42e1-875c-4ba58c7cd4c2-kube-api-access-mtwhv\") pod \"dnsmasq-dns-6b9c8b59c-h4qws\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.157434 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-httpd-config\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.158400 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ls5d\" (UniqueName: \"kubernetes.io/projected/e06bccd4-aefb-4055-b4eb-ef745234cbcb-kube-api-access-9ls5d\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.158716 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-ovndb-tls-certs\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.158792 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-combined-ca-bundle\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.158942 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-config\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.165619 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-httpd-config\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.167497 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-combined-ca-bundle\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.171195 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-ovndb-tls-certs\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.172037 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-config\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.185751 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ls5d\" (UniqueName: \"kubernetes.io/projected/e06bccd4-aefb-4055-b4eb-ef745234cbcb-kube-api-access-9ls5d\") pod \"neutron-75bd4868-pp5tq\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") " pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.200932 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.220123 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kv78b"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.260367 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-combined-ca-bundle\") pod \"24b23531-3ad1-4b46-88f6-e930d79b6556\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") "
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.260419 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-scripts\") pod \"24b23531-3ad1-4b46-88f6-e930d79b6556\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") "
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.273479 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-scripts" (OuterVolumeSpecName: "scripts") pod "24b23531-3ad1-4b46-88f6-e930d79b6556" (UID: "24b23531-3ad1-4b46-88f6-e930d79b6556"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.292631 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24b23531-3ad1-4b46-88f6-e930d79b6556" (UID: "24b23531-3ad1-4b46-88f6-e930d79b6556"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.353385 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.363823 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-config-data\") pod \"24b23531-3ad1-4b46-88f6-e930d79b6556\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") "
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.363984 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgbpp\" (UniqueName: \"kubernetes.io/projected/24b23531-3ad1-4b46-88f6-e930d79b6556-kube-api-access-hgbpp\") pod \"24b23531-3ad1-4b46-88f6-e930d79b6556\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") "
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.364017 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24b23531-3ad1-4b46-88f6-e930d79b6556-logs\") pod \"24b23531-3ad1-4b46-88f6-e930d79b6556\" (UID: \"24b23531-3ad1-4b46-88f6-e930d79b6556\") "
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.364485 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.364506 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.364811 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24b23531-3ad1-4b46-88f6-e930d79b6556-logs" (OuterVolumeSpecName: "logs") pod "24b23531-3ad1-4b46-88f6-e930d79b6556" (UID: "24b23531-3ad1-4b46-88f6-e930d79b6556"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.375746 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b23531-3ad1-4b46-88f6-e930d79b6556-kube-api-access-hgbpp" (OuterVolumeSpecName: "kube-api-access-hgbpp") pod "24b23531-3ad1-4b46-88f6-e930d79b6556" (UID: "24b23531-3ad1-4b46-88f6-e930d79b6556"). InnerVolumeSpecName "kube-api-access-hgbpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.435373 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-config-data" (OuterVolumeSpecName: "config-data") pod "24b23531-3ad1-4b46-88f6-e930d79b6556" (UID: "24b23531-3ad1-4b46-88f6-e930d79b6556"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.448463 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.455168 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.487534 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24b23531-3ad1-4b46-88f6-e930d79b6556-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.487556 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgbpp\" (UniqueName: \"kubernetes.io/projected/24b23531-3ad1-4b46-88f6-e930d79b6556-kube-api-access-hgbpp\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.487567 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24b23531-3ad1-4b46-88f6-e930d79b6556-logs\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.633996 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1e05f528-a644-4eb2-878d-65ca7558e66b","Type":"ContainerStarted","Data":"856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2"}
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.648871 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kv78b" event={"ID":"24b23531-3ad1-4b46-88f6-e930d79b6556","Type":"ContainerDied","Data":"d05ea9371e00f9a76783f9ecab654baa090e90f55b4c9d6bb28a1cdf5b7a8447"}
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.648934 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d05ea9371e00f9a76783f9ecab654baa090e90f55b4c9d6bb28a1cdf5b7a8447"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.649148 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kv78b"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.649315 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.867602 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7567d9469d-rx5dx"]
Dec 11 10:34:04 crc kubenswrapper[4953]: E1211 10:34:04.868055 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b23531-3ad1-4b46-88f6-e930d79b6556" containerName="placement-db-sync"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.868069 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b23531-3ad1-4b46-88f6-e930d79b6556" containerName="placement-db-sync"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.868244 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b23531-3ad1-4b46-88f6-e930d79b6556" containerName="placement-db-sync"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.872748 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.897840 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.898043 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.900214 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.900400 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sjbdx"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.903794 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.923856 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7567d9469d-rx5dx"]
Dec 11 10:34:04 crc kubenswrapper[4953]: I1211 10:34:04.943775 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-h4qws"]
Dec 11 10:34:04 crc kubenswrapper[4953]: W1211 10:34:04.974157 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41289b74_d869_42e1_875c_4ba58c7cd4c2.slice/crio-468b237f2754b5b1deadc73ca419702adc05fea149b09441ce56e5c05edf6be0 WatchSource:0}: Error finding container 468b237f2754b5b1deadc73ca419702adc05fea149b09441ce56e5c05edf6be0: Status 404 returned error can't find the container with id 468b237f2754b5b1deadc73ca419702adc05fea149b09441ce56e5c05edf6be0
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.012730 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zggh\" (UniqueName: \"kubernetes.io/projected/345a513a-93a0-4e23-9266-3eeaf3ff0c10-kube-api-access-4zggh\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.012772 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-config-data\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.012855 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-internal-tls-certs\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.012891 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-scripts\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.012952 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-public-tls-certs\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.012972 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/345a513a-93a0-4e23-9266-3eeaf3ff0c10-logs\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.013003 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-combined-ca-bundle\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.115196 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-scripts\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.115344 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-public-tls-certs\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.115389 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/345a513a-93a0-4e23-9266-3eeaf3ff0c10-logs\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.115444 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-combined-ca-bundle\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.115480 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zggh\" (UniqueName: \"kubernetes.io/projected/345a513a-93a0-4e23-9266-3eeaf3ff0c10-kube-api-access-4zggh\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.115514 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-config-data\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.115583 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-internal-tls-certs\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.116977 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/345a513a-93a0-4e23-9266-3eeaf3ff0c10-logs\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.121844 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-config-data\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.122633 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-scripts\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.126266 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-combined-ca-bundle\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.128944 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-public-tls-certs\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.133288 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-internal-tls-certs\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.151529 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zggh\" (UniqueName: \"kubernetes.io/projected/345a513a-93a0-4e23-9266-3eeaf3ff0c10-kube-api-access-4zggh\") pod \"placement-7567d9469d-rx5dx\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.234810 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7567d9469d-rx5dx"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.279885 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n8dxg"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.421027 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-fernet-keys\") pod \"3079f9fc-3d3e-4647-a889-fae4277437fc\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") "
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.421092 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-credential-keys\") pod \"3079f9fc-3d3e-4647-a889-fae4277437fc\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") "
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.421130 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-config-data\") pod \"3079f9fc-3d3e-4647-a889-fae4277437fc\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") "
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.421182 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-combined-ca-bundle\") pod \"3079f9fc-3d3e-4647-a889-fae4277437fc\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") "
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.421350 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v82r\" (UniqueName: \"kubernetes.io/projected/3079f9fc-3d3e-4647-a889-fae4277437fc-kube-api-access-7v82r\") pod \"3079f9fc-3d3e-4647-a889-fae4277437fc\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") "
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.421452 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-scripts\") pod \"3079f9fc-3d3e-4647-a889-fae4277437fc\" (UID: \"3079f9fc-3d3e-4647-a889-fae4277437fc\") "
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.426244 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3079f9fc-3d3e-4647-a889-fae4277437fc" (UID: "3079f9fc-3d3e-4647-a889-fae4277437fc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.428673 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-scripts" (OuterVolumeSpecName: "scripts") pod "3079f9fc-3d3e-4647-a889-fae4277437fc" (UID: "3079f9fc-3d3e-4647-a889-fae4277437fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.440001 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3079f9fc-3d3e-4647-a889-fae4277437fc-kube-api-access-7v82r" (OuterVolumeSpecName: "kube-api-access-7v82r") pod "3079f9fc-3d3e-4647-a889-fae4277437fc" (UID: "3079f9fc-3d3e-4647-a889-fae4277437fc"). InnerVolumeSpecName "kube-api-access-7v82r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.444998 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3079f9fc-3d3e-4647-a889-fae4277437fc" (UID: "3079f9fc-3d3e-4647-a889-fae4277437fc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.468719 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3079f9fc-3d3e-4647-a889-fae4277437fc" (UID: "3079f9fc-3d3e-4647-a889-fae4277437fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.489704 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-config-data" (OuterVolumeSpecName: "config-data") pod "3079f9fc-3d3e-4647-a889-fae4277437fc" (UID: "3079f9fc-3d3e-4647-a889-fae4277437fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.528873 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v82r\" (UniqueName: \"kubernetes.io/projected/3079f9fc-3d3e-4647-a889-fae4277437fc-kube-api-access-7v82r\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.528926 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.528940 4953 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.528952 4953 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.528966 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.528979 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3079f9fc-3d3e-4647-a889-fae4277437fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.564458 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75bd4868-pp5tq"]
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.676196 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n8dxg"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.676220 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n8dxg" event={"ID":"3079f9fc-3d3e-4647-a889-fae4277437fc","Type":"ContainerDied","Data":"44e177a5c1d7e6f8e71f7ef0d9ee1a2caf526ec0cf4e74e43c7a25c9084f32dc"}
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.676266 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44e177a5c1d7e6f8e71f7ef0d9ee1a2caf526ec0cf4e74e43c7a25c9084f32dc"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.695337 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1e05f528-a644-4eb2-878d-65ca7558e66b","Type":"ContainerStarted","Data":"fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c"}
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.695523 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1e05f528-a644-4eb2-878d-65ca7558e66b" containerName="glance-log" containerID="cri-o://856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2" gracePeriod=30
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.696154 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1e05f528-a644-4eb2-878d-65ca7558e66b" containerName="glance-httpd" containerID="cri-o://fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c" gracePeriod=30
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.705247 4953 generic.go:334] "Generic (PLEG): container finished" podID="41289b74-d869-42e1-875c-4ba58c7cd4c2" containerID="ebec616faad41e4c980a50c5c12b2c9ee0e89e06b3fde358ccfdbe02d5b9dabf" exitCode=0
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.705309 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws" event={"ID":"41289b74-d869-42e1-875c-4ba58c7cd4c2","Type":"ContainerDied","Data":"ebec616faad41e4c980a50c5c12b2c9ee0e89e06b3fde358ccfdbe02d5b9dabf"}
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.705335 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws" event={"ID":"41289b74-d869-42e1-875c-4ba58c7cd4c2","Type":"ContainerStarted","Data":"468b237f2754b5b1deadc73ca419702adc05fea149b09441ce56e5c05edf6be0"}
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.719965 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd4868-pp5tq" event={"ID":"e06bccd4-aefb-4055-b4eb-ef745234cbcb","Type":"ContainerStarted","Data":"5927711a51eb04b60afec3c91ee723574f00db82e0124df7790c58fd1d1531b5"}
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.754354 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-55b68558f8-r49n8"]
Dec 11 10:34:05 crc kubenswrapper[4953]: E1211 10:34:05.755105 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3079f9fc-3d3e-4647-a889-fae4277437fc" containerName="keystone-bootstrap"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.755123 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3079f9fc-3d3e-4647-a889-fae4277437fc" containerName="keystone-bootstrap"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.755530 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="3079f9fc-3d3e-4647-a889-fae4277437fc" containerName="keystone-bootstrap"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.757512 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.766534 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.772646 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.773274 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.773715 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v2jcr"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.773832 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.778538 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.778516379 podStartE2EDuration="5.778516379s" podCreationTimestamp="2025-12-11 10:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:05.728256239 +0000 UTC m=+1363.752115272" watchObservedRunningTime="2025-12-11 10:34:05.778516379 +0000 UTC m=+1363.802375422"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.782839 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.783290 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3","Type":"ContainerStarted","Data":"af5c887e7eca5fd7ff6f9c4156747d8342560f30e97d1ce68369f481de4f9ac9"}
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.783342 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" podUID="6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" containerName="dnsmasq-dns" containerID="cri-o://dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940" gracePeriod=10
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.783393 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" containerName="glance-log" containerID="cri-o://8b938ad3bfb7e53a045b18b3f364af08cdb2463b187560c5f2a84f9baa4e9479" gracePeriod=30
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.783537 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" containerName="glance-httpd" containerID="cri-o://af5c887e7eca5fd7ff6f9c4156747d8342560f30e97d1ce68369f481de4f9ac9" gracePeriod=30
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.824479 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-55b68558f8-r49n8"]
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.842925 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-scripts\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.843083 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-internal-tls-certs\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.843118 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-credential-keys\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.843141 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-config-data\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.843186 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz6qh\" (UniqueName: \"kubernetes.io/projected/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-kube-api-access-qz6qh\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.843223 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-fernet-keys\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.843252 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-public-tls-certs\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.843280 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-combined-ca-bundle\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.871558 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.871535492 podStartE2EDuration="5.871535492s" podCreationTimestamp="2025-12-11 10:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:05.819784639 +0000 UTC m=+1363.843643672" watchObservedRunningTime="2025-12-11 10:34:05.871535492 +0000 UTC m=+1363.895394535"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.911201 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7567d9469d-rx5dx"]
Dec 11 10:34:05 crc kubenswrapper[4953]: W1211 10:34:05.923904 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod345a513a_93a0_4e23_9266_3eeaf3ff0c10.slice/crio-4e803c3ca9de08930239d9fb373f85ae0e80aceb76085ed679d03e93d05ec822 WatchSource:0}: Error finding container 4e803c3ca9de08930239d9fb373f85ae0e80aceb76085ed679d03e93d05ec822: Status 404 returned error can't find the container with id 4e803c3ca9de08930239d9fb373f85ae0e80aceb76085ed679d03e93d05ec822
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.944955 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-internal-tls-certs\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.945027 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-credential-keys\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.945155 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-config-data\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.945230 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz6qh\" (UniqueName: \"kubernetes.io/projected/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-kube-api-access-qz6qh\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.945271 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-fernet-keys\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.945307 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-public-tls-certs\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.945347 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-combined-ca-bundle\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.945394 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-scripts\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.954676 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-scripts\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.959750 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-internal-tls-certs\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.963293 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-fernet-keys\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.963845 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz6qh\" (UniqueName: \"kubernetes.io/projected/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-kube-api-access-qz6qh\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.966015 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-credential-keys\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.966539 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-config-data\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.969711 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-combined-ca-bundle\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:05 crc kubenswrapper[4953]: I1211 10:34:05.974037 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-public-tls-certs\") pod \"keystone-55b68558f8-r49n8\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") " pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.287420 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.410082 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv"
Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.416197 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.455872 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-ovsdbserver-sb\") pod \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") "
Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.456156 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"1e05f528-a644-4eb2-878d-65ca7558e66b\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") "
Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.456181 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-dns-svc\") pod \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") "
Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.456202 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-combined-ca-bundle\") pod \"1e05f528-a644-4eb2-878d-65ca7558e66b\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") "
Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.456227 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-config\") pod \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") "
Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.456245 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpwhn\" (UniqueName: \"kubernetes.io/projected/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-kube-api-access-rpwhn\") pod \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") "
Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.456265 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e05f528-a644-4eb2-878d-65ca7558e66b-logs\") pod \"1e05f528-a644-4eb2-878d-65ca7558e66b\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") "
Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.456295 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-scripts\") pod \"1e05f528-a644-4eb2-878d-65ca7558e66b\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") "
Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.456347 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-ovsdbserver-nb\") pod \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") "
Dec 11
10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.456378 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-config-data\") pod \"1e05f528-a644-4eb2-878d-65ca7558e66b\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.456400 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxs6l\" (UniqueName: \"kubernetes.io/projected/1e05f528-a644-4eb2-878d-65ca7558e66b-kube-api-access-dxs6l\") pod \"1e05f528-a644-4eb2-878d-65ca7558e66b\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.456421 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-dns-swift-storage-0\") pod \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\" (UID: \"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf\") " Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.456442 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e05f528-a644-4eb2-878d-65ca7558e66b-httpd-run\") pod \"1e05f528-a644-4eb2-878d-65ca7558e66b\" (UID: \"1e05f528-a644-4eb2-878d-65ca7558e66b\") " Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.457585 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e05f528-a644-4eb2-878d-65ca7558e66b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1e05f528-a644-4eb2-878d-65ca7558e66b" (UID: "1e05f528-a644-4eb2-878d-65ca7558e66b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.457853 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e05f528-a644-4eb2-878d-65ca7558e66b-logs" (OuterVolumeSpecName: "logs") pod "1e05f528-a644-4eb2-878d-65ca7558e66b" (UID: "1e05f528-a644-4eb2-878d-65ca7558e66b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.466635 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-kube-api-access-rpwhn" (OuterVolumeSpecName: "kube-api-access-rpwhn") pod "6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" (UID: "6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf"). InnerVolumeSpecName "kube-api-access-rpwhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.467025 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-scripts" (OuterVolumeSpecName: "scripts") pod "1e05f528-a644-4eb2-878d-65ca7558e66b" (UID: "1e05f528-a644-4eb2-878d-65ca7558e66b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.486923 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "1e05f528-a644-4eb2-878d-65ca7558e66b" (UID: "1e05f528-a644-4eb2-878d-65ca7558e66b"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.496218 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e05f528-a644-4eb2-878d-65ca7558e66b-kube-api-access-dxs6l" (OuterVolumeSpecName: "kube-api-access-dxs6l") pod "1e05f528-a644-4eb2-878d-65ca7558e66b" (UID: "1e05f528-a644-4eb2-878d-65ca7558e66b"). InnerVolumeSpecName "kube-api-access-dxs6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.559092 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxs6l\" (UniqueName: \"kubernetes.io/projected/1e05f528-a644-4eb2-878d-65ca7558e66b-kube-api-access-dxs6l\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.559127 4953 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e05f528-a644-4eb2-878d-65ca7558e66b-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.559146 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.559156 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpwhn\" (UniqueName: \"kubernetes.io/projected/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-kube-api-access-rpwhn\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.559165 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e05f528-a644-4eb2-878d-65ca7558e66b-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.559173 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.603661 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e05f528-a644-4eb2-878d-65ca7558e66b" (UID: "1e05f528-a644-4eb2-878d-65ca7558e66b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.633755 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" (UID: "6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.654387 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-config" (OuterVolumeSpecName: "config") pod "6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" (UID: "6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.654981 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.661524 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.661708 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.661817 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.661896 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.668250 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" (UID: "6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.681418 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" (UID: "6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.720749 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" (UID: "6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.757829 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-config-data" (OuterVolumeSpecName: "config-data") pod "1e05f528-a644-4eb2-878d-65ca7558e66b" (UID: "1e05f528-a644-4eb2-878d-65ca7558e66b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.763490 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e05f528-a644-4eb2-878d-65ca7558e66b-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.763541 4953 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.763557 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.763589 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.808917 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd4868-pp5tq" event={"ID":"e06bccd4-aefb-4055-b4eb-ef745234cbcb","Type":"ContainerStarted","Data":"ce6bd3dbc16b31a00af008f4d0a7b764c9419ef9ba4de250ba6d865f380c53fd"} Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.812093 4953 generic.go:334] "Generic (PLEG): container finished" podID="7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" containerID="af5c887e7eca5fd7ff6f9c4156747d8342560f30e97d1ce68369f481de4f9ac9" exitCode=0 Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.812134 4953 generic.go:334] "Generic (PLEG): container finished" podID="7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" containerID="8b938ad3bfb7e53a045b18b3f364af08cdb2463b187560c5f2a84f9baa4e9479" exitCode=143 Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.812128 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3","Type":"ContainerDied","Data":"af5c887e7eca5fd7ff6f9c4156747d8342560f30e97d1ce68369f481de4f9ac9"} Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.812162 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3","Type":"ContainerDied","Data":"8b938ad3bfb7e53a045b18b3f364af08cdb2463b187560c5f2a84f9baa4e9479"} Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.815221 4953 generic.go:334] "Generic (PLEG): container finished" podID="1e05f528-a644-4eb2-878d-65ca7558e66b" containerID="fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c" exitCode=143 Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.815248 4953 generic.go:334] "Generic (PLEG): container finished" podID="1e05f528-a644-4eb2-878d-65ca7558e66b" containerID="856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2" exitCode=143 Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.815251 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1e05f528-a644-4eb2-878d-65ca7558e66b","Type":"ContainerDied","Data":"fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c"} Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.815290 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"1e05f528-a644-4eb2-878d-65ca7558e66b","Type":"ContainerDied","Data":"856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2"} Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.815306 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1e05f528-a644-4eb2-878d-65ca7558e66b","Type":"ContainerDied","Data":"91068aec120efedc1df48c316743420f9d3f674d603325497c81bdae43eb78a0"} Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.815325 4953 scope.go:117] "RemoveContainer" containerID="fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.815475 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.827519 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7567d9469d-rx5dx" event={"ID":"345a513a-93a0-4e23-9266-3eeaf3ff0c10","Type":"ContainerStarted","Data":"111e78ea1225285d6f9cf9e61ccddd3adee93f71a7ea5c5159526554c821ed7c"} Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.827562 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7567d9469d-rx5dx" event={"ID":"345a513a-93a0-4e23-9266-3eeaf3ff0c10","Type":"ContainerStarted","Data":"4e803c3ca9de08930239d9fb373f85ae0e80aceb76085ed679d03e93d05ec822"} Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.831131 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws" event={"ID":"41289b74-d869-42e1-875c-4ba58c7cd4c2","Type":"ContainerStarted","Data":"9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4"} Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.832616 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.835183 4953 generic.go:334] "Generic (PLEG): container finished" podID="6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" containerID="dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940" exitCode=0 Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.835244 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" event={"ID":"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf","Type":"ContainerDied","Data":"dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940"} Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.835291 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" event={"ID":"6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf","Type":"ContainerDied","Data":"e3992bade22e56713ef83740f5d162cb0dc36001ba79303fdb04b8d12f67f889"} Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.835386 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-s9gvv" Dec 11 10:34:06 crc kubenswrapper[4953]: I1211 10:34:06.859693 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws" podStartSLOduration=3.859675586 podStartE2EDuration="3.859675586s" podCreationTimestamp="2025-12-11 10:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:06.857350807 +0000 UTC m=+1364.881209840" watchObservedRunningTime="2025-12-11 10:34:06.859675586 +0000 UTC m=+1364.883534619" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.003300 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-55b68558f8-r49n8"] Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.008482 4953 scope.go:117] "RemoveContainer" containerID="856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.018934 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.038920 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.062327 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.085332 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-s9gvv"] Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.133761 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-s9gvv"] Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.152624 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:34:07 crc kubenswrapper[4953]: E1211 10:34:07.153069 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" containerName="init" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.153091 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" containerName="init" Dec 11 10:34:07 crc kubenswrapper[4953]: E1211 10:34:07.153105 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" containerName="glance-log" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.153113 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" containerName="glance-log" Dec 11 10:34:07 crc kubenswrapper[4953]: E1211 10:34:07.153126 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e05f528-a644-4eb2-878d-65ca7558e66b" containerName="glance-log" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.153136 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e05f528-a644-4eb2-878d-65ca7558e66b" containerName="glance-log" Dec 11 10:34:07 crc kubenswrapper[4953]: E1211 10:34:07.153153 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e05f528-a644-4eb2-878d-65ca7558e66b" containerName="glance-httpd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.153160 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e05f528-a644-4eb2-878d-65ca7558e66b" containerName="glance-httpd" Dec 11 10:34:07 crc 
kubenswrapper[4953]: E1211 10:34:07.153185 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" containerName="dnsmasq-dns" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.153192 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" containerName="dnsmasq-dns" Dec 11 10:34:07 crc kubenswrapper[4953]: E1211 10:34:07.153206 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" containerName="glance-httpd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.153214 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" containerName="glance-httpd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.153431 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e05f528-a644-4eb2-878d-65ca7558e66b" containerName="glance-log" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.153453 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" containerName="glance-log" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.153466 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" containerName="dnsmasq-dns" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.153479 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e05f528-a644-4eb2-878d-65ca7558e66b" containerName="glance-httpd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.153497 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" containerName="glance-httpd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.155167 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.162117 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.162388 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.170281 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.171079 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-combined-ca-bundle\") pod \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.171237 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-httpd-run\") pod \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.171288 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-config-data\") pod \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.171335 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.171398 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-scripts\") pod \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.171536 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb7nt\" (UniqueName: \"kubernetes.io/projected/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-kube-api-access-hb7nt\") pod \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.171608 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-logs\") pod \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\" (UID: \"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3\") " Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.172538 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-logs" (OuterVolumeSpecName: "logs") pod "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" (UID: "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.177318 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" (UID: "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.177746 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" (UID: "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.186946 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-scripts" (OuterVolumeSpecName: "scripts") pod "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" (UID: "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.197466 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-kube-api-access-hb7nt" (OuterVolumeSpecName: "kube-api-access-hb7nt") pod "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" (UID: "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3"). InnerVolumeSpecName "kube-api-access-hb7nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.207903 4953 scope.go:117] "RemoveContainer" containerID="fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c" Dec 11 10:34:07 crc kubenswrapper[4953]: E1211 10:34:07.213031 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c\": container with ID starting with fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c not found: ID does not exist" containerID="fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.213075 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c"} err="failed to get container status \"fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c\": rpc error: code = NotFound desc = could not find container \"fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c\": container with ID starting with fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c not found: ID does not exist" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.213107 4953 scope.go:117] "RemoveContainer" containerID="856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2" Dec 11 10:34:07 crc kubenswrapper[4953]: E1211 10:34:07.214781 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2\": container with ID starting with 856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2 not 
found: ID does not exist" containerID="856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.214840 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2"} err="failed to get container status \"856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2\": rpc error: code = NotFound desc = could not find container \"856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2\": container with ID starting with 856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2 not found: ID does not exist" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.214881 4953 scope.go:117] "RemoveContainer" containerID="fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.215282 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c"} err="failed to get container status \"fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c\": rpc error: code = NotFound desc = could not find container \"fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c\": container with ID starting with fd4bd68e3b453d1ec7a5e434e9ffd67804c362038e8f476bc8dd7f720835d07c not found: ID does not exist" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.215511 4953 scope.go:117] "RemoveContainer" containerID="856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.217390 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2"} err="failed to get container status \"856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2\": rpc error: code = NotFound desc = could not find container \"856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2\": container with ID starting with 856dfcd83a38a7441349e889788d51b0f8f9d225c46e38db576588d64b1b29a2 not found: ID does not exist" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.217428 4953 scope.go:117] "RemoveContainer" containerID="dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.246885 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" (UID: "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.254631 4953 scope.go:117] "RemoveContainer" containerID="d3a2f32fa13b9b070accf6e1045a8eb80ad869e00a70f3ba2130840107e102ac" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.279718 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.279767 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.279794 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.279834 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.279856 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5ndh\" (UniqueName: \"kubernetes.io/projected/b8c04c52-6e9d-4254-a222-85f06c186b92-kube-api-access-n5ndh\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.279918 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.279960 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c04c52-6e9d-4254-a222-85f06c186b92-logs\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.279991 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8c04c52-6e9d-4254-a222-85f06c186b92-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.280058 4953 reconciler_common.go:293] 
"Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.280087 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.280097 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.280106 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb7nt\" (UniqueName: \"kubernetes.io/projected/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-kube-api-access-hb7nt\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.280117 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.280125 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.289864 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-config-data" (OuterVolumeSpecName: "config-data") pod "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" (UID: "7734dd1c-9884-47a8-86fb-7f7dbf0c1af3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.327076 4953 scope.go:117] "RemoveContainer" containerID="dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940" Dec 11 10:34:07 crc kubenswrapper[4953]: E1211 10:34:07.327714 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940\": container with ID starting with dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940 not found: ID does not exist" containerID="dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.327839 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940"} err="failed to get container status \"dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940\": rpc error: code = NotFound desc = could not find container \"dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940\": container with ID starting with dc8f357c0cc6139f74b3dd83b06fac66a1b420c470e862e678caa306efab7940 not found: ID does not exist" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.327945 4953 scope.go:117] "RemoveContainer" containerID="d3a2f32fa13b9b070accf6e1045a8eb80ad869e00a70f3ba2130840107e102ac" Dec 11 10:34:07 crc kubenswrapper[4953]: E1211 10:34:07.328360 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a2f32fa13b9b070accf6e1045a8eb80ad869e00a70f3ba2130840107e102ac\": container with ID starting with d3a2f32fa13b9b070accf6e1045a8eb80ad869e00a70f3ba2130840107e102ac not found: ID does not exist" containerID="d3a2f32fa13b9b070accf6e1045a8eb80ad869e00a70f3ba2130840107e102ac" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.328414 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a2f32fa13b9b070accf6e1045a8eb80ad869e00a70f3ba2130840107e102ac"} err="failed to get container status \"d3a2f32fa13b9b070accf6e1045a8eb80ad869e00a70f3ba2130840107e102ac\": rpc error: code = NotFound desc = could not find container \"d3a2f32fa13b9b070accf6e1045a8eb80ad869e00a70f3ba2130840107e102ac\": container with ID starting with d3a2f32fa13b9b070accf6e1045a8eb80ad869e00a70f3ba2130840107e102ac not found: ID does not exist" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.338501 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.381211 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.381524 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c04c52-6e9d-4254-a222-85f06c186b92-logs\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: 
I1211 10:34:07.381603 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8c04c52-6e9d-4254-a222-85f06c186b92-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.381690 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.381726 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.381746 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.381763 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.381786 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5ndh\" (UniqueName: \"kubernetes.io/projected/b8c04c52-6e9d-4254-a222-85f06c186b92-kube-api-access-n5ndh\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.381846 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.381858 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.382949 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.383331 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8c04c52-6e9d-4254-a222-85f06c186b92-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " 
pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.383446 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c04c52-6e9d-4254-a222-85f06c186b92-logs\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.386688 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.401499 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.401499 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.401733 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.416342 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5ndh\" (UniqueName: \"kubernetes.io/projected/b8c04c52-6e9d-4254-a222-85f06c186b92-kube-api-access-n5ndh\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.429029 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.513001 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.522188 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-677c7c8c9c-gh7rd"] Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.524332 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.533924 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.538447 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.556842 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-677c7c8c9c-gh7rd"] Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.693880 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-httpd-config\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.694308 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-combined-ca-bundle\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.694341 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-config\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.694376 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-ovndb-tls-certs\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.694504 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdjfv\" (UniqueName: \"kubernetes.io/projected/261b522a-b786-4b2b-975c-43f1cc0d8ccf-kube-api-access-qdjfv\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.694556 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-internal-tls-certs\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.696078 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-public-tls-certs\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.797961 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdjfv\" (UniqueName: 
\"kubernetes.io/projected/261b522a-b786-4b2b-975c-43f1cc0d8ccf-kube-api-access-qdjfv\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.798029 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-internal-tls-certs\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.798056 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-public-tls-certs\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.798102 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-httpd-config\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.798165 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-combined-ca-bundle\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.798186 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-config\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.798214 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-ovndb-tls-certs\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.803088 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-public-tls-certs\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.803521 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-internal-tls-certs\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.803779 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-config\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " 
pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.805954 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-combined-ca-bundle\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.806689 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-httpd-config\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.819611 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdjfv\" (UniqueName: \"kubernetes.io/projected/261b522a-b786-4b2b-975c-43f1cc0d8ccf-kube-api-access-qdjfv\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.822654 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-ovndb-tls-certs\") pod \"neutron-677c7c8c9c-gh7rd\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.848998 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7567d9469d-rx5dx" event={"ID":"345a513a-93a0-4e23-9266-3eeaf3ff0c10","Type":"ContainerStarted","Data":"dfc2a9a94740a5c1e7c18669633ef308479efaeb144cead2c91d20383752f603"} Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.849762 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7567d9469d-rx5dx" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.855162 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.863765 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd4868-pp5tq" event={"ID":"e06bccd4-aefb-4055-b4eb-ef745234cbcb","Type":"ContainerStarted","Data":"936b2499143aa85b9d93c66ed77301a1ddbc5fb5ce305f425ffb081da5d18a69"} Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.863823 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75bd4868-pp5tq" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.866830 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7734dd1c-9884-47a8-86fb-7f7dbf0c1af3","Type":"ContainerDied","Data":"994c5967d28e5e38feb9a29a80b2012a41602362de777cc160827bfecf4f649e"} Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.866871 4953 scope.go:117] "RemoveContainer" containerID="af5c887e7eca5fd7ff6f9c4156747d8342560f30e97d1ce68369f481de4f9ac9" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.867118 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.885296 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7567d9469d-rx5dx" podStartSLOduration=3.885269048 podStartE2EDuration="3.885269048s" podCreationTimestamp="2025-12-11 10:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:07.874657292 +0000 UTC m=+1365.898516335" watchObservedRunningTime="2025-12-11 10:34:07.885269048 +0000 UTC m=+1365.909128101" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.886617 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55b68558f8-r49n8" event={"ID":"b4e64ea9-3129-46a7-8197-bdd7730ad3f1","Type":"ContainerStarted","Data":"e95a830582a33c31ab5aeaf4d56f0badd309548a0a67ed5368d9a6983add712a"} Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.886667 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55b68558f8-r49n8" event={"ID":"b4e64ea9-3129-46a7-8197-bdd7730ad3f1","Type":"ContainerStarted","Data":"55c5e9404c3838b9c9ec907a74d56b2088eb8ac8f0292c6df2f9030b8129afcb"} Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.887467 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-55b68558f8-r49n8" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.910371 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75bd4868-pp5tq" podStartSLOduration=4.910348895 podStartE2EDuration="4.910348895s" podCreationTimestamp="2025-12-11 10:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:07.903226964 +0000 UTC m=+1365.927085997" watchObservedRunningTime="2025-12-11 10:34:07.910348895 +0000 UTC m=+1365.934207928" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.939969 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-55b68558f8-r49n8" podStartSLOduration=2.939947388 podStartE2EDuration="2.939947388s" podCreationTimestamp="2025-12-11 10:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:07.934237648 +0000 UTC m=+1365.958096691" watchObservedRunningTime="2025-12-11 10:34:07.939947388 +0000 UTC m=+1365.963806411" Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.965226 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:34:07 crc kubenswrapper[4953]: I1211 10:34:07.978630 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.001261 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.009899 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.019733 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.020150 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.035013 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.112524 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.112940 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.113088 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.113200 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.113299 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/755305c4-518a-48f8-b732-a825b32487f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.113388 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/755305c4-518a-48f8-b732-a825b32487f6-logs\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.113589 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.113708 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jjhcg\" (UniqueName: \"kubernetes.io/projected/755305c4-518a-48f8-b732-a825b32487f6-kube-api-access-jjhcg\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.214864 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.215159 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/755305c4-518a-48f8-b732-a825b32487f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.215286 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/755305c4-518a-48f8-b732-a825b32487f6-logs\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.215483 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.215614 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjhcg\" (UniqueName: \"kubernetes.io/projected/755305c4-518a-48f8-b732-a825b32487f6-kube-api-access-jjhcg\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.215718 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.215848 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.215915 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/755305c4-518a-48f8-b732-a825b32487f6-logs\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.215975 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.216035 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.217747 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/755305c4-518a-48f8-b732-a825b32487f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.224853 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.225328 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.227394 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.231371 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.232029 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjhcg\" (UniqueName: \"kubernetes.io/projected/755305c4-518a-48f8-b732-a825b32487f6-kube-api-access-jjhcg\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.242232 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.357335 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.493494 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e05f528-a644-4eb2-878d-65ca7558e66b" path="/var/lib/kubelet/pods/1e05f528-a644-4eb2-878d-65ca7558e66b/volumes" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.494440 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf" path="/var/lib/kubelet/pods/6f3b40d9-353f-44c2-b0fe-6e0f62e8ddbf/volumes" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.495209 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7734dd1c-9884-47a8-86fb-7f7dbf0c1af3" path="/var/lib/kubelet/pods/7734dd1c-9884-47a8-86fb-7f7dbf0c1af3/volumes" Dec 11 10:34:08 crc kubenswrapper[4953]: E1211 10:34:08.608114 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97b9ff8e_f944_48ee_803a_d6873a9db805.slice/crio-conmon-79cc47d9dc3c03e712eaad55e52c68d02d784451419037cdd7fbdbf61ac6149e.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:34:08 crc kubenswrapper[4953]: I1211 10:34:08.895968 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7567d9469d-rx5dx" Dec 11 10:34:10 crc kubenswrapper[4953]: I1211 10:34:10.844758 4953 scope.go:117] "RemoveContainer" containerID="8b938ad3bfb7e53a045b18b3f364af08cdb2463b187560c5f2a84f9baa4e9479" Dec 11 10:34:11 crc kubenswrapper[4953]: I1211 10:34:11.425643 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-677c7c8c9c-gh7rd"] Dec 11 10:34:11 crc kubenswrapper[4953]: I1211 10:34:11.555015 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:34:11 crc kubenswrapper[4953]: I1211 10:34:11.962137 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-677c7c8c9c-gh7rd" event={"ID":"261b522a-b786-4b2b-975c-43f1cc0d8ccf","Type":"ContainerStarted","Data":"a95e449e61c33c558d02a877bc79a04b411cb5d264a424c33a6de6627ddfb3ee"} Dec 11 10:34:11 crc kubenswrapper[4953]: I1211 10:34:11.970028 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"755305c4-518a-48f8-b732-a825b32487f6","Type":"ContainerStarted","Data":"94741ede7ed8427b5f6a9607545b74932b6d6d1c091555374a5e7b143ed17bfd"} Dec 11 10:34:12 crc kubenswrapper[4953]: I1211 10:34:12.660988 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:34:13 crc kubenswrapper[4953]: I1211 10:34:13.021675 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d8b6f34-cdf2-4285-ba3e-3a14621430e5","Type":"ContainerStarted","Data":"9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf"} Dec 11 10:34:13 crc kubenswrapper[4953]: I1211 10:34:13.036210 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b8c04c52-6e9d-4254-a222-85f06c186b92","Type":"ContainerStarted","Data":"4b4078bfee6504a43c8452ccd9f53e4be023e0a20fb30022f659e18955499ca1"} Dec 11 10:34:13 crc kubenswrapper[4953]: I1211 10:34:13.053129 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vj92k" 
event={"ID":"c46c2893-6218-455e-a4ee-cf1b4cda45b7","Type":"ContainerStarted","Data":"18da5e592b7c52416a16674c9366cc1d74bf9348703b8485835f8bd76a25aaba"} Dec 11 10:34:13 crc kubenswrapper[4953]: I1211 10:34:13.066551 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-677c7c8c9c-gh7rd" event={"ID":"261b522a-b786-4b2b-975c-43f1cc0d8ccf","Type":"ContainerStarted","Data":"15faef1b4ad4c5d4d8142bd02ca5c8b72aa84f70caf14fbea0d98e763e1ee6d8"} Dec 11 10:34:13 crc kubenswrapper[4953]: I1211 10:34:13.123809 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-vj92k" podStartSLOduration=4.209550469 podStartE2EDuration="39.123785031s" podCreationTimestamp="2025-12-11 10:33:34 +0000 UTC" firstStartedPulling="2025-12-11 10:33:37.039756138 +0000 UTC m=+1335.063615171" lastFinishedPulling="2025-12-11 10:34:11.9539907 +0000 UTC m=+1369.977849733" observedRunningTime="2025-12-11 10:34:13.104053482 +0000 UTC m=+1371.127912535" watchObservedRunningTime="2025-12-11 10:34:13.123785031 +0000 UTC m=+1371.147644074" Dec 11 10:34:14 crc kubenswrapper[4953]: I1211 10:34:14.101875 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-677c7c8c9c-gh7rd" event={"ID":"261b522a-b786-4b2b-975c-43f1cc0d8ccf","Type":"ContainerStarted","Data":"8ec34f149eb7b0df59ed60ac6fbbd810019ea5b30d0ab842e625394e2d8c2226"} Dec 11 10:34:14 crc kubenswrapper[4953]: I1211 10:34:14.106180 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:14 crc kubenswrapper[4953]: I1211 10:34:14.154486 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"755305c4-518a-48f8-b732-a825b32487f6","Type":"ContainerStarted","Data":"3cc49f527dc719457e9eb36e597ae7042fff0664634dbe3b3be02b7b0b78227b"} Dec 11 10:34:14 crc kubenswrapper[4953]: I1211 10:34:14.176905 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-677c7c8c9c-gh7rd" podStartSLOduration=7.176879752 podStartE2EDuration="7.176879752s" podCreationTimestamp="2025-12-11 10:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:14.155074471 +0000 UTC m=+1372.178933504" watchObservedRunningTime="2025-12-11 10:34:14.176879752 +0000 UTC m=+1372.200738795" Dec 11 10:34:14 crc kubenswrapper[4953]: I1211 10:34:14.255358 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws" Dec 11 10:34:14 crc kubenswrapper[4953]: I1211 10:34:14.382828 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fd458c8cc-bk9wq"] Dec 11 10:34:14 crc kubenswrapper[4953]: I1211 10:34:14.383679 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" podUID="6eaf3f1e-6dc5-4283-9fce-0955a2f18821" containerName="dnsmasq-dns" containerID="cri-o://d3bb9df50f5cdaf964d561c82928b11bff5098e78bf6642e181807a69bac3a00" gracePeriod=10 Dec 11 10:34:15 crc kubenswrapper[4953]: I1211 10:34:15.067269 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" podUID="6eaf3f1e-6dc5-4283-9fce-0955a2f18821" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Dec 11 10:34:15 crc kubenswrapper[4953]: I1211 10:34:15.171865 4953 
generic.go:334] "Generic (PLEG): container finished" podID="6eaf3f1e-6dc5-4283-9fce-0955a2f18821" containerID="d3bb9df50f5cdaf964d561c82928b11bff5098e78bf6642e181807a69bac3a00" exitCode=0 Dec 11 10:34:15 crc kubenswrapper[4953]: I1211 10:34:15.171945 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" event={"ID":"6eaf3f1e-6dc5-4283-9fce-0955a2f18821","Type":"ContainerDied","Data":"d3bb9df50f5cdaf964d561c82928b11bff5098e78bf6642e181807a69bac3a00"} Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.203311 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b8c04c52-6e9d-4254-a222-85f06c186b92","Type":"ContainerStarted","Data":"178ccd876435208341ba5464adb24cfa2cf54bd9fcc3241af2650d6d14702f82"} Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.209875 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"755305c4-518a-48f8-b732-a825b32487f6","Type":"ContainerStarted","Data":"45336e0c31787cbf6b006a00a75f728e5a2a56c26216027dc3655f6abff8a706"} Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.220979 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fzjwm" event={"ID":"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a","Type":"ContainerStarted","Data":"a0f8dccdb79158a6e8b0a17545e3aa563f2f8c468c204b94766b32ede580daf7"} Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.659626 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.659598908 podStartE2EDuration="9.659598908s" podCreationTimestamp="2025-12-11 10:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:16.632301034 +0000 UTC m=+1374.656160077" watchObservedRunningTime="2025-12-11 10:34:16.659598908 +0000 UTC m=+1374.683457941" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.670044 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-fzjwm" podStartSLOduration=6.424588318 podStartE2EDuration="42.670021638s" podCreationTimestamp="2025-12-11 10:33:34 +0000 UTC" firstStartedPulling="2025-12-11 10:33:37.01915995 +0000 UTC m=+1335.043018983" lastFinishedPulling="2025-12-11 10:34:13.26459327 +0000 UTC m=+1371.288452303" observedRunningTime="2025-12-11 10:34:16.655442093 +0000 UTC m=+1374.679301126" watchObservedRunningTime="2025-12-11 10:34:16.670021638 +0000 UTC m=+1374.693880671" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.673804 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.788465 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7wqz\" (UniqueName: \"kubernetes.io/projected/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-kube-api-access-r7wqz\") pod \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.788640 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-ovsdbserver-nb\") pod \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.788728 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-dns-swift-storage-0\") pod \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.788757 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-ovsdbserver-sb\") pod \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.788817 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-config\") pod \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.788894 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-dns-svc\") pod \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\" (UID: \"6eaf3f1e-6dc5-4283-9fce-0955a2f18821\") " Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.804044 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-kube-api-access-r7wqz" (OuterVolumeSpecName: "kube-api-access-r7wqz") pod "6eaf3f1e-6dc5-4283-9fce-0955a2f18821" (UID: "6eaf3f1e-6dc5-4283-9fce-0955a2f18821"). InnerVolumeSpecName "kube-api-access-r7wqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.876260 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-config" (OuterVolumeSpecName: "config") pod "6eaf3f1e-6dc5-4283-9fce-0955a2f18821" (UID: "6eaf3f1e-6dc5-4283-9fce-0955a2f18821"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.891356 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7wqz\" (UniqueName: \"kubernetes.io/projected/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-kube-api-access-r7wqz\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.891393 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.892592 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6eaf3f1e-6dc5-4283-9fce-0955a2f18821" (UID: "6eaf3f1e-6dc5-4283-9fce-0955a2f18821"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.908545 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6eaf3f1e-6dc5-4283-9fce-0955a2f18821" (UID: "6eaf3f1e-6dc5-4283-9fce-0955a2f18821"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.910086 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6eaf3f1e-6dc5-4283-9fce-0955a2f18821" (UID: "6eaf3f1e-6dc5-4283-9fce-0955a2f18821"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.916182 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6eaf3f1e-6dc5-4283-9fce-0955a2f18821" (UID: "6eaf3f1e-6dc5-4283-9fce-0955a2f18821"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.993008 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.993059 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.993076 4953 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:16 crc kubenswrapper[4953]: I1211 10:34:16.993091 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6eaf3f1e-6dc5-4283-9fce-0955a2f18821-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:17 crc kubenswrapper[4953]: I1211 10:34:17.231097 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b8c04c52-6e9d-4254-a222-85f06c186b92","Type":"ContainerStarted","Data":"affd56e178058ed46144c2bfb37fc7368d599e663b8408df548ed9b1be736499"} Dec 11 10:34:17 crc kubenswrapper[4953]: I1211 10:34:17.233140 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" event={"ID":"6eaf3f1e-6dc5-4283-9fce-0955a2f18821","Type":"ContainerDied","Data":"1d53ea0dee0bae03e762f92b51484520e356039aee01693272f8971adde2fb36"} Dec 11 10:34:17 crc kubenswrapper[4953]: I1211 10:34:17.233182 4953 scope.go:117] "RemoveContainer" containerID="d3bb9df50f5cdaf964d561c82928b11bff5098e78bf6642e181807a69bac3a00" Dec 11 10:34:17 crc kubenswrapper[4953]: I1211 10:34:17.233208 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fd458c8cc-bk9wq" Dec 11 10:34:17 crc kubenswrapper[4953]: I1211 10:34:17.253062 4953 scope.go:117] "RemoveContainer" containerID="cb3388d4f6392a8a1aa9d14419f6a8c47b8b33077fa0d041d34378312b05ba2f" Dec 11 10:34:17 crc kubenswrapper[4953]: I1211 10:34:17.311782 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.311763128 podStartE2EDuration="10.311763128s" podCreationTimestamp="2025-12-11 10:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:17.274177027 +0000 UTC m=+1375.298036060" watchObservedRunningTime="2025-12-11 10:34:17.311763128 +0000 UTC m=+1375.335622161" Dec 11 10:34:17 crc kubenswrapper[4953]: I1211 10:34:17.312743 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fd458c8cc-bk9wq"] Dec 11 10:34:17 crc kubenswrapper[4953]: I1211 10:34:17.334719 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fd458c8cc-bk9wq"] Dec 11 10:34:17 crc kubenswrapper[4953]: I1211 10:34:17.513999 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:17 crc kubenswrapper[4953]: I1211 10:34:17.514051 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:17 crc kubenswrapper[4953]: I1211 10:34:17.555092 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:17 crc kubenswrapper[4953]: I1211 10:34:17.559199 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:18 crc kubenswrapper[4953]: I1211 10:34:18.292297 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:34:18 crc kubenswrapper[4953]: I1211 10:34:18.292711 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:34:18 crc kubenswrapper[4953]: I1211 10:34:18.304646 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:18 crc kubenswrapper[4953]: I1211 10:34:18.304708 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:18 crc kubenswrapper[4953]: I1211 10:34:18.358361 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 10:34:18 crc kubenswrapper[4953]: I1211 10:34:18.358432 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 10:34:18 crc kubenswrapper[4953]: I1211 10:34:18.390412 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 10:34:18 
crc kubenswrapper[4953]: I1211 10:34:18.419384 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 10:34:18 crc kubenswrapper[4953]: I1211 10:34:18.620203 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eaf3f1e-6dc5-4283-9fce-0955a2f18821" path="/var/lib/kubelet/pods/6eaf3f1e-6dc5-4283-9fce-0955a2f18821/volumes" Dec 11 10:34:19 crc kubenswrapper[4953]: E1211 10:34:19.107009 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc46c2893_6218_455e_a4ee_cf1b4cda45b7.slice/crio-18da5e592b7c52416a16674c9366cc1d74bf9348703b8485835f8bd76a25aaba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc46c2893_6218_455e_a4ee_cf1b4cda45b7.slice/crio-conmon-18da5e592b7c52416a16674c9366cc1d74bf9348703b8485835f8bd76a25aaba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97b9ff8e_f944_48ee_803a_d6873a9db805.slice/crio-conmon-79cc47d9dc3c03e712eaad55e52c68d02d784451419037cdd7fbdbf61ac6149e.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:34:19 crc kubenswrapper[4953]: I1211 10:34:19.622262 4953 generic.go:334] "Generic (PLEG): container finished" podID="c46c2893-6218-455e-a4ee-cf1b4cda45b7" containerID="18da5e592b7c52416a16674c9366cc1d74bf9348703b8485835f8bd76a25aaba" exitCode=0 Dec 11 10:34:19 crc kubenswrapper[4953]: I1211 10:34:19.624018 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vj92k" event={"ID":"c46c2893-6218-455e-a4ee-cf1b4cda45b7","Type":"ContainerDied","Data":"18da5e592b7c52416a16674c9366cc1d74bf9348703b8485835f8bd76a25aaba"} Dec 11 10:34:19 crc kubenswrapper[4953]: I1211 10:34:19.624355 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 10:34:19 crc kubenswrapper[4953]: I1211 10:34:19.624392 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 10:34:21 crc kubenswrapper[4953]: I1211 10:34:21.642803 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:34:22 crc kubenswrapper[4953]: I1211 10:34:22.966731 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 10:34:22 crc kubenswrapper[4953]: I1211 10:34:22.967056 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:34:23 crc kubenswrapper[4953]: I1211 10:34:23.228405 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 10:34:23 crc kubenswrapper[4953]: I1211 10:34:23.259489 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:23 crc kubenswrapper[4953]: I1211 10:34:23.720863 4953 generic.go:334] "Generic (PLEG): container finished" podID="ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" containerID="a0f8dccdb79158a6e8b0a17545e3aa563f2f8c468c204b94766b32ede580daf7" exitCode=0 Dec 11 10:34:23 crc kubenswrapper[4953]: I1211 10:34:23.721200 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fzjwm" 
event={"ID":"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a","Type":"ContainerDied","Data":"a0f8dccdb79158a6e8b0a17545e3aa563f2f8c468c204b94766b32ede580daf7"} Dec 11 10:34:25 crc kubenswrapper[4953]: I1211 10:34:25.401507 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.748998 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fzjwm" event={"ID":"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a","Type":"ContainerDied","Data":"d27bba439c4b635543631228acc1c2063a31e300590d4a5bdfc9e830e563ac53"} Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.750613 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d27bba439c4b635543631228acc1c2063a31e300590d4a5bdfc9e830e563ac53" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.762323 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fzjwm" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.802393 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-combined-ca-bundle\") pod \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.802473 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-config-data\") pod \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.802503 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-scripts\") pod \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.802618 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-etc-machine-id\") pod \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.802687 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-db-sync-config-data\") pod \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.802720 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdjkq\" (UniqueName: \"kubernetes.io/projected/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-kube-api-access-mdjkq\") pod \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\" (UID: \"ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a\") " Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.803164 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" (UID: "ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.811998 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" (UID: "ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.812044 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-scripts" (OuterVolumeSpecName: "scripts") pod "ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" (UID: "ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.812856 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-kube-api-access-mdjkq" (OuterVolumeSpecName: "kube-api-access-mdjkq") pod "ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" (UID: "ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a"). InnerVolumeSpecName "kube-api-access-mdjkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.855047 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-config-data" (OuterVolumeSpecName: "config-data") pod "ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" (UID: "ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.857376 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" (UID: "ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.904414 4953 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.904450 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdjkq\" (UniqueName: \"kubernetes.io/projected/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-kube-api-access-mdjkq\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.904462 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.904473 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.904481 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:26 crc kubenswrapper[4953]: I1211 10:34:26.904489 4953 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.156590 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vj92k" Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.208507 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c46c2893-6218-455e-a4ee-cf1b4cda45b7-db-sync-config-data\") pod \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\" (UID: \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\") " Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.208562 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9d7f\" (UniqueName: \"kubernetes.io/projected/c46c2893-6218-455e-a4ee-cf1b4cda45b7-kube-api-access-g9d7f\") pod \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\" (UID: \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\") " Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.208712 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46c2893-6218-455e-a4ee-cf1b4cda45b7-combined-ca-bundle\") pod \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\" (UID: \"c46c2893-6218-455e-a4ee-cf1b4cda45b7\") " Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.213559 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c46c2893-6218-455e-a4ee-cf1b4cda45b7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c46c2893-6218-455e-a4ee-cf1b4cda45b7" (UID: "c46c2893-6218-455e-a4ee-cf1b4cda45b7"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.213671 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46c2893-6218-455e-a4ee-cf1b4cda45b7-kube-api-access-g9d7f" (OuterVolumeSpecName: "kube-api-access-g9d7f") pod "c46c2893-6218-455e-a4ee-cf1b4cda45b7" (UID: "c46c2893-6218-455e-a4ee-cf1b4cda45b7"). InnerVolumeSpecName "kube-api-access-g9d7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.236392 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c46c2893-6218-455e-a4ee-cf1b4cda45b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c46c2893-6218-455e-a4ee-cf1b4cda45b7" (UID: "c46c2893-6218-455e-a4ee-cf1b4cda45b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.310484 4953 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c46c2893-6218-455e-a4ee-cf1b4cda45b7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.310524 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9d7f\" (UniqueName: \"kubernetes.io/projected/c46c2893-6218-455e-a4ee-cf1b4cda45b7-kube-api-access-g9d7f\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.310538 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46c2893-6218-455e-a4ee-cf1b4cda45b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.766962 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d8b6f34-cdf2-4285-ba3e-3a14621430e5","Type":"ContainerStarted","Data":"70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e"} Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.767117 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="sg-core" containerID="cri-o://9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf" gracePeriod=30 Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.767154 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="ceilometer-notification-agent" containerID="cri-o://cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882" gracePeriod=30 Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.767111 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="ceilometer-central-agent" containerID="cri-o://b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4" gracePeriod=30 Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.767213 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="proxy-httpd" containerID="cri-o://70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e" gracePeriod=30 Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.767349 
4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.768897 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fzjwm" Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.769315 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vj92k" Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.770776 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vj92k" event={"ID":"c46c2893-6218-455e-a4ee-cf1b4cda45b7","Type":"ContainerDied","Data":"56029ec7e2a42a192f9680e4e3663d1f9e8c1f15c36559368f015fe019823fc3"} Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.770821 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56029ec7e2a42a192f9680e4e3663d1f9e8c1f15c36559368f015fe019823fc3" Dec 11 10:34:27 crc kubenswrapper[4953]: I1211 10:34:27.807814 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.540254807 podStartE2EDuration="53.807796441s" podCreationTimestamp="2025-12-11 10:33:34 +0000 UTC" firstStartedPulling="2025-12-11 10:33:36.970605511 +0000 UTC m=+1334.994464544" lastFinishedPulling="2025-12-11 10:34:27.238147145 +0000 UTC m=+1385.262006178" observedRunningTime="2025-12-11 10:34:27.798174714 +0000 UTC m=+1385.822033767" watchObservedRunningTime="2025-12-11 10:34:27.807796441 +0000 UTC m=+1385.831655474" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.091209 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:34:28 crc kubenswrapper[4953]: E1211 10:34:28.091924 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eaf3f1e-6dc5-4283-9fce-0955a2f18821" containerName="init" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.091955 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eaf3f1e-6dc5-4283-9fce-0955a2f18821" containerName="init" Dec 11 10:34:28 crc kubenswrapper[4953]: E1211 10:34:28.091976 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c46c2893-6218-455e-a4ee-cf1b4cda45b7" containerName="barbican-db-sync" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.091982 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c46c2893-6218-455e-a4ee-cf1b4cda45b7" containerName="barbican-db-sync" Dec 11 10:34:28 crc kubenswrapper[4953]: E1211 10:34:28.091999 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eaf3f1e-6dc5-4283-9fce-0955a2f18821" containerName="dnsmasq-dns" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.092005 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eaf3f1e-6dc5-4283-9fce-0955a2f18821" containerName="dnsmasq-dns" Dec 11 10:34:28 crc kubenswrapper[4953]: E1211 10:34:28.092027 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" containerName="cinder-db-sync" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.092033 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" containerName="cinder-db-sync" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.092206 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eaf3f1e-6dc5-4283-9fce-0955a2f18821" containerName="dnsmasq-dns" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 
10:34:28.092220 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c46c2893-6218-455e-a4ee-cf1b4cda45b7" containerName="barbican-db-sync" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.092233 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" containerName="cinder-db-sync" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.093194 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.100382 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2dmsf" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.100624 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.100748 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.101789 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.118970 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.128168 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8zs2\" (UniqueName: \"kubernetes.io/projected/96eb50ce-78f1-4258-bd16-4a79030a7209-kube-api-access-j8zs2\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.128271 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.128341 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-scripts\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.128392 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96eb50ce-78f1-4258-bd16-4a79030a7209-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.128424 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.128471 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-config-data\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.167748 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9f5756c4f-t7jg6"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.169290 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.189643 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f5756c4f-t7jg6"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.229672 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-config\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.229760 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8zs2\" (UniqueName: \"kubernetes.io/projected/96eb50ce-78f1-4258-bd16-4a79030a7209-kube-api-access-j8zs2\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.229808 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.229850 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-ovsdbserver-nb\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.229879 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-scripts\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.229899 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-dns-swift-storage-0\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.229916 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-dns-svc\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.229939 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-z8ftg\" (UniqueName: \"kubernetes.io/projected/6a8bca99-1b39-4c66-9251-659c2feb0b42-kube-api-access-z8ftg\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.229966 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96eb50ce-78f1-4258-bd16-4a79030a7209-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.229990 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.230015 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-ovsdbserver-sb\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.230037 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-config-data\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.230837 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96eb50ce-78f1-4258-bd16-4a79030a7209-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.236811 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.245461 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-scripts\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.246952 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-config-data\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.249715 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " 
pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.282300 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8zs2\" (UniqueName: \"kubernetes.io/projected/96eb50ce-78f1-4258-bd16-4a79030a7209-kube-api-access-j8zs2\") pod \"cinder-scheduler-0\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.331827 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-config\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.331936 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-ovsdbserver-nb\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.331971 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-dns-swift-storage-0\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.331992 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-dns-svc\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.332013 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8ftg\" (UniqueName: \"kubernetes.io/projected/6a8bca99-1b39-4c66-9251-659c2feb0b42-kube-api-access-z8ftg\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.332080 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-ovsdbserver-sb\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.333029 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-config\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.333260 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-dns-swift-storage-0\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.333520 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-ovsdbserver-nb\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.338341 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-dns-svc\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.338853 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-ovsdbserver-sb\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.368444 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ftg\" (UniqueName: \"kubernetes.io/projected/6a8bca99-1b39-4c66-9251-659c2feb0b42-kube-api-access-z8ftg\") pod \"dnsmasq-dns-9f5756c4f-t7jg6\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.392154 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6cffd87c8c-wlgnt"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.394328 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.403241 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tl9gl" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.404938 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.407585 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.421993 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-555fcfcf54-sqln7"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.423625 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.426358 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.433442 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-config-data-custom\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.433543 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/544e1955-4316-4587-90a8-94bac4f81ae5-logs\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.433574 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q67g\" (UniqueName: \"kubernetes.io/projected/544e1955-4316-4587-90a8-94bac4f81ae5-kube-api-access-8q67g\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.433617 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-config-data\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.433716 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-combined-ca-bundle\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.449106 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.451631 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6cffd87c8c-wlgnt"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.469930 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-555fcfcf54-sqln7"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.514408 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.541150 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.544508 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-combined-ca-bundle\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.544601 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f6q8\" (UniqueName: \"kubernetes.io/projected/caec0159-12b1-46f9-952c-10f229948036-kube-api-access-5f6q8\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.544671 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-combined-ca-bundle\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.544712 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-config-data-custom\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.544764 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-config-data\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.544809 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caec0159-12b1-46f9-952c-10f229948036-logs\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.544883 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-config-data-custom\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.544986 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/544e1955-4316-4587-90a8-94bac4f81ae5-logs\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " 
pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.545018 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q67g\" (UniqueName: \"kubernetes.io/projected/544e1955-4316-4587-90a8-94bac4f81ae5-kube-api-access-8q67g\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.545068 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-config-data\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.548488 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.548661 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.551390 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.551730 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/544e1955-4316-4587-90a8-94bac4f81ae5-logs\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.557629 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-combined-ca-bundle\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.559053 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-config-data-custom\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.560196 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f5756c4f-t7jg6"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.565920 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-config-data\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.605838 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q67g\" (UniqueName: \"kubernetes.io/projected/544e1955-4316-4587-90a8-94bac4f81ae5-kube-api-access-8q67g\") pod \"barbican-worker-6cffd87c8c-wlgnt\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.662973 4953 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-combined-ca-bundle\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.663322 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv7dq\" (UniqueName: \"kubernetes.io/projected/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-kube-api-access-xv7dq\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.663356 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f6q8\" (UniqueName: \"kubernetes.io/projected/caec0159-12b1-46f9-952c-10f229948036-kube-api-access-5f6q8\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.663413 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-scripts\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.663453 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-config-data\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.663483 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caec0159-12b1-46f9-952c-10f229948036-logs\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.663504 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.663545 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-config-data-custom\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.663573 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-config-data-custom\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 
10:34:28.663634 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.663745 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-logs\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.663776 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-config-data\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.672001 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caec0159-12b1-46f9-952c-10f229948036-logs\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.682966 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-config-data\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.683640 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-combined-ca-bundle\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.688155 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-config-data-custom\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.709699 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-qfmcg"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.711810 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.731125 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f6q8\" (UniqueName: \"kubernetes.io/projected/caec0159-12b1-46f9-952c-10f229948036-kube-api-access-5f6q8\") pod \"barbican-keystone-listener-555fcfcf54-sqln7\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.731889 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.745154 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-qfmcg"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.765288 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-scripts\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.765336 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.765370 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.765397 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.765421 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.765444 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-config-data-custom\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.765458 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-config\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc 
kubenswrapper[4953]: I1211 10:34:28.765487 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.765502 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.765565 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-logs\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.765606 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-config-data\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.765626 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9zdp\" (UniqueName: \"kubernetes.io/projected/586be632-9d3d-46be-9de4-5059e771edcf-kube-api-access-q9zdp\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.765676 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv7dq\" (UniqueName: \"kubernetes.io/projected/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-kube-api-access-xv7dq\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.771076 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-logs\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.771520 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.775122 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.788096 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-898848ccb-4kkwg"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.789747 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.793145 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.808034 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-scripts\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.810899 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-config-data\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.819325 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.821332 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv7dq\" (UniqueName: \"kubernetes.io/projected/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-kube-api-access-xv7dq\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.829424 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-config-data-custom\") pod \"cinder-api-0\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.863677 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-898848ccb-4kkwg"] Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.868746 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.868988 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df58633-8c06-4dd0-a538-b696f9736f6d-logs\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.869175 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-combined-ca-bundle\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.869301 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9zdp\" (UniqueName: 
\"kubernetes.io/projected/586be632-9d3d-46be-9de4-5059e771edcf-kube-api-access-q9zdp\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.869428 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjngd\" (UniqueName: \"kubernetes.io/projected/8df58633-8c06-4dd0-a538-b696f9736f6d-kube-api-access-sjngd\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.869574 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-config-data\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.869728 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.869868 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-config-data-custom\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.869990 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.870122 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.870234 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-config\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.881884 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-config\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.882850 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.885370 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.886053 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.887534 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.926748 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.942518 4953 generic.go:334] "Generic (PLEG): container finished" podID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerID="70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e" exitCode=0 Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.942553 4953 generic.go:334] "Generic (PLEG): container finished" podID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerID="9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf" exitCode=2 Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.942562 4953 generic.go:334] "Generic (PLEG): container finished" podID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerID="b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4" exitCode=0 Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.942586 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d8b6f34-cdf2-4285-ba3e-3a14621430e5","Type":"ContainerDied","Data":"70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e"} Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.942634 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d8b6f34-cdf2-4285-ba3e-3a14621430e5","Type":"ContainerDied","Data":"9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf"} Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.942647 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d8b6f34-cdf2-4285-ba3e-3a14621430e5","Type":"ContainerDied","Data":"b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4"} Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.946189 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9zdp\" (UniqueName: \"kubernetes.io/projected/586be632-9d3d-46be-9de4-5059e771edcf-kube-api-access-q9zdp\") pod \"dnsmasq-dns-75bfc9b94f-qfmcg\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " 
pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.961514 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.971748 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-config-data\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.971822 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-config-data-custom\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.971893 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df58633-8c06-4dd0-a538-b696f9736f6d-logs\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.971953 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-combined-ca-bundle\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.972014 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjngd\" (UniqueName: \"kubernetes.io/projected/8df58633-8c06-4dd0-a538-b696f9736f6d-kube-api-access-sjngd\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.974045 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df58633-8c06-4dd0-a538-b696f9736f6d-logs\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.977683 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-config-data\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.979172 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-config-data-custom\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.983684 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-combined-ca-bundle\") pod 
\"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:28 crc kubenswrapper[4953]: I1211 10:34:28.999627 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjngd\" (UniqueName: \"kubernetes.io/projected/8df58633-8c06-4dd0-a538-b696f9736f6d-kube-api-access-sjngd\") pod \"barbican-api-898848ccb-4kkwg\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.272451 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:29 crc kubenswrapper[4953]: W1211 10:34:29.387704 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a8bca99_1b39_4c66_9251_659c2feb0b42.slice/crio-821444ba8d29747dbec592c9b63b192232662490c0f68c922bb52433c531e6d5 WatchSource:0}: Error finding container 821444ba8d29747dbec592c9b63b192232662490c0f68c922bb52433c531e6d5: Status 404 returned error can't find the container with id 821444ba8d29747dbec592c9b63b192232662490c0f68c922bb52433c531e6d5 Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.409704 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f5756c4f-t7jg6"] Dec 11 10:34:29 crc kubenswrapper[4953]: W1211 10:34:29.418096 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96eb50ce_78f1_4258_bd16_4a79030a7209.slice/crio-41e74ddc8f74bd623fbbe1c5351ac5046ab3eb7cc50b0aab4ce767b22e3a600b WatchSource:0}: Error finding container 41e74ddc8f74bd623fbbe1c5351ac5046ab3eb7cc50b0aab4ce767b22e3a600b: Status 404 returned error can't find the container with id 41e74ddc8f74bd623fbbe1c5351ac5046ab3eb7cc50b0aab4ce767b22e3a600b Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.454562 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.577579 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6cffd87c8c-wlgnt"] Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.731949 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.741497 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-555fcfcf54-sqln7"] Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.828489 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-qfmcg"] Dec 11 10:34:29 crc kubenswrapper[4953]: W1211 10:34:29.877210 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod586be632_9d3d_46be_9de4_5059e771edcf.slice/crio-d7f5fceb502709e64de5699368af6f800334811ac687f77bec1862433c0f5f1c WatchSource:0}: Error finding container d7f5fceb502709e64de5699368af6f800334811ac687f77bec1862433c0f5f1c: Status 404 returned error can't find the container with id d7f5fceb502709e64de5699368af6f800334811ac687f77bec1862433c0f5f1c Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.970770 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"96eb50ce-78f1-4258-bd16-4a79030a7209","Type":"ContainerStarted","Data":"41e74ddc8f74bd623fbbe1c5351ac5046ab3eb7cc50b0aab4ce767b22e3a600b"} Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.974386 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" event={"ID":"586be632-9d3d-46be-9de4-5059e771edcf","Type":"ContainerStarted","Data":"d7f5fceb502709e64de5699368af6f800334811ac687f77bec1862433c0f5f1c"} Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.980068 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9","Type":"ContainerStarted","Data":"9424b4c28ae7636cbffccfbac9961a6e67a70c2666e89b46d9dd850af79f2e63"} Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.986949 4953 generic.go:334] "Generic (PLEG): container finished" podID="6a8bca99-1b39-4c66-9251-659c2feb0b42" containerID="a4d646b49eaf4090d0b2fe544cedf68e734397b4cde6a8a6515f3ae3086897a4" exitCode=0 Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.987033 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" event={"ID":"6a8bca99-1b39-4c66-9251-659c2feb0b42","Type":"ContainerDied","Data":"a4d646b49eaf4090d0b2fe544cedf68e734397b4cde6a8a6515f3ae3086897a4"} Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.987255 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" event={"ID":"6a8bca99-1b39-4c66-9251-659c2feb0b42","Type":"ContainerStarted","Data":"821444ba8d29747dbec592c9b63b192232662490c0f68c922bb52433c531e6d5"} Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.991106 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.991831 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" event={"ID":"caec0159-12b1-46f9-952c-10f229948036","Type":"ContainerStarted","Data":"44b47d8eb8cc371d2022c5dadcc674fbdcac5dae0c680bd3bd0f7d9e81ddc2d4"} Dec 11 10:34:29 crc kubenswrapper[4953]: I1211 10:34:29.995206 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" event={"ID":"544e1955-4316-4587-90a8-94bac4f81ae5","Type":"ContainerStarted","Data":"e0837a1a89fc8f4183d7e64dada5af8c45d46c64eb05cfb2eaf2de1357dbe535"} Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.004725 4953 generic.go:334] "Generic (PLEG): container finished" podID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerID="cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882" exitCode=0 Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.004793 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d8b6f34-cdf2-4285-ba3e-3a14621430e5","Type":"ContainerDied","Data":"cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882"} Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.004856 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d8b6f34-cdf2-4285-ba3e-3a14621430e5","Type":"ContainerDied","Data":"8c58aebaa8fb1834f5a0d0ccb48322178bcefbe5e68940f2e37715801d0ffcfb"} Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.004881 4953 scope.go:117] "RemoveContainer" containerID="70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 
10:34:30.005052 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.010844 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-898848ccb-4kkwg"] Dec 11 10:34:30 crc kubenswrapper[4953]: W1211 10:34:30.044059 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8df58633_8c06_4dd0_a538_b696f9736f6d.slice/crio-efbf57ee6cc4e52395d74297cfc67e1fd8c8270f4e8e0befe2f0f91f5aba8ddb WatchSource:0}: Error finding container efbf57ee6cc4e52395d74297cfc67e1fd8c8270f4e8e0befe2f0f91f5aba8ddb: Status 404 returned error can't find the container with id efbf57ee6cc4e52395d74297cfc67e1fd8c8270f4e8e0befe2f0f91f5aba8ddb Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.085758 4953 scope.go:117] "RemoveContainer" containerID="9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.090575 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-scripts\") pod \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.090654 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-config-data\") pod \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.090722 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-combined-ca-bundle\") pod \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.090857 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-run-httpd\") pod \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.090974 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-log-httpd\") pod \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.091082 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmxmt\" (UniqueName: \"kubernetes.io/projected/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-kube-api-access-cmxmt\") pod \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.091168 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-sg-core-conf-yaml\") pod \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\" (UID: \"6d8b6f34-cdf2-4285-ba3e-3a14621430e5\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.092523 4953 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6d8b6f34-cdf2-4285-ba3e-3a14621430e5" (UID: "6d8b6f34-cdf2-4285-ba3e-3a14621430e5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.092770 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6d8b6f34-cdf2-4285-ba3e-3a14621430e5" (UID: "6d8b6f34-cdf2-4285-ba3e-3a14621430e5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.109480 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-scripts" (OuterVolumeSpecName: "scripts") pod "6d8b6f34-cdf2-4285-ba3e-3a14621430e5" (UID: "6d8b6f34-cdf2-4285-ba3e-3a14621430e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.109634 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-kube-api-access-cmxmt" (OuterVolumeSpecName: "kube-api-access-cmxmt") pod "6d8b6f34-cdf2-4285-ba3e-3a14621430e5" (UID: "6d8b6f34-cdf2-4285-ba3e-3a14621430e5"). InnerVolumeSpecName "kube-api-access-cmxmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.136135 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6d8b6f34-cdf2-4285-ba3e-3a14621430e5" (UID: "6d8b6f34-cdf2-4285-ba3e-3a14621430e5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.157644 4953 scope.go:117] "RemoveContainer" containerID="cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.193865 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.193898 4953 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.193914 4953 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.193926 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmxmt\" (UniqueName: \"kubernetes.io/projected/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-kube-api-access-cmxmt\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.193938 4953 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.195962 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d8b6f34-cdf2-4285-ba3e-3a14621430e5" (UID: "6d8b6f34-cdf2-4285-ba3e-3a14621430e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.212519 4953 scope.go:117] "RemoveContainer" containerID="b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.278916 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-config-data" (OuterVolumeSpecName: "config-data") pod "6d8b6f34-cdf2-4285-ba3e-3a14621430e5" (UID: "6d8b6f34-cdf2-4285-ba3e-3a14621430e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.295993 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.296023 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8b6f34-cdf2-4285-ba3e-3a14621430e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.341702 4953 scope.go:117] "RemoveContainer" containerID="70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e" Dec 11 10:34:30 crc kubenswrapper[4953]: E1211 10:34:30.342695 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e\": container with ID starting with 70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e not found: ID does not exist" containerID="70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.342725 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e"} err="failed to get container status \"70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e\": rpc error: code = NotFound desc = could not find container \"70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e\": container with ID starting with 70bd7d1e341debc2af897dcd018b6b214ea87e3d3d21ceaba6b2b5980b1fd70e not found: ID does not exist" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.342751 4953 scope.go:117] "RemoveContainer" containerID="9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf" Dec 11 10:34:30 crc kubenswrapper[4953]: E1211 10:34:30.343163 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf\": container with ID starting with 9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf not found: ID does not exist" containerID="9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.343186 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf"} err="failed to get container status \"9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf\": rpc error: code = NotFound desc = could not find container \"9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf\": container with ID starting with 9677c91cd4dc130ea589d769d53b8e181f14b5902e1b3aa2756e000d3caa2bbf not found: ID does not exist" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.343201 4953 scope.go:117] "RemoveContainer" containerID="cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882" Dec 11 10:34:30 crc kubenswrapper[4953]: E1211 10:34:30.343643 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882\": container with ID starting with 
cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882 not found: ID does not exist" containerID="cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.343680 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882"} err="failed to get container status \"cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882\": rpc error: code = NotFound desc = could not find container \"cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882\": container with ID starting with cc48d3199b1b2a4c8feaf882b069ef69dca86cf1af56abcb97105af58084c882 not found: ID does not exist" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.343709 4953 scope.go:117] "RemoveContainer" containerID="b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4" Dec 11 10:34:30 crc kubenswrapper[4953]: E1211 10:34:30.344360 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4\": container with ID starting with b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4 not found: ID does not exist" containerID="b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.344380 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4"} err="failed to get container status \"b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4\": rpc error: code = NotFound desc = could not find container \"b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4\": container with ID starting with b063d78bdb8872cd1f53adf7805e16b4e8338bef9d8bd7c26023226e68457eb4 not found: ID does not exist" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.347692 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.366271 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.376980 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:34:30 crc kubenswrapper[4953]: E1211 10:34:30.377976 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="proxy-httpd" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.377995 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="proxy-httpd" Dec 11 10:34:30 crc kubenswrapper[4953]: E1211 10:34:30.378029 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="sg-core" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.378035 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="sg-core" Dec 11 10:34:30 crc kubenswrapper[4953]: E1211 10:34:30.378052 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="ceilometer-central-agent" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.378058 4953 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="ceilometer-central-agent" Dec 11 10:34:30 crc kubenswrapper[4953]: E1211 10:34:30.378070 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="ceilometer-notification-agent" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.378075 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="ceilometer-notification-agent" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.378276 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="ceilometer-notification-agent" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.378295 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="proxy-httpd" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.378304 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="ceilometer-central-agent" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.378310 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" containerName="sg-core" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.379954 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.387483 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.387922 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.395938 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.443468 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.489886 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d8b6f34-cdf2-4285-ba3e-3a14621430e5" path="/var/lib/kubelet/pods/6d8b6f34-cdf2-4285-ba3e-3a14621430e5/volumes" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.506475 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-config-data\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.506515 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.506698 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-run-httpd\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.506772 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-scripts\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.506921 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.506951 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6znx\" (UniqueName: \"kubernetes.io/projected/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-kube-api-access-x6znx\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.507061 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-log-httpd\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.607914 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8ftg\" (UniqueName: \"kubernetes.io/projected/6a8bca99-1b39-4c66-9251-659c2feb0b42-kube-api-access-z8ftg\") pod \"6a8bca99-1b39-4c66-9251-659c2feb0b42\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.607990 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-ovsdbserver-sb\") 
pod \"6a8bca99-1b39-4c66-9251-659c2feb0b42\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.608049 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-ovsdbserver-nb\") pod \"6a8bca99-1b39-4c66-9251-659c2feb0b42\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.608076 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-dns-swift-storage-0\") pod \"6a8bca99-1b39-4c66-9251-659c2feb0b42\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.608182 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-dns-svc\") pod \"6a8bca99-1b39-4c66-9251-659c2feb0b42\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.608294 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-config\") pod \"6a8bca99-1b39-4c66-9251-659c2feb0b42\" (UID: \"6a8bca99-1b39-4c66-9251-659c2feb0b42\") " Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.608519 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-config-data\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.608541 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.608666 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-run-httpd\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.608693 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-scripts\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.608730 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.608748 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6znx\" (UniqueName: \"kubernetes.io/projected/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-kube-api-access-x6znx\") pod \"ceilometer-0\" (UID: 
\"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.608807 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-log-httpd\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.609214 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-log-httpd\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.611032 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-run-httpd\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.619943 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a8bca99-1b39-4c66-9251-659c2feb0b42-kube-api-access-z8ftg" (OuterVolumeSpecName: "kube-api-access-z8ftg") pod "6a8bca99-1b39-4c66-9251-659c2feb0b42" (UID: "6a8bca99-1b39-4c66-9251-659c2feb0b42"). InnerVolumeSpecName "kube-api-access-z8ftg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.621188 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.623042 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-scripts\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.624490 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-config-data\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.629270 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.643442 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6znx\" (UniqueName: \"kubernetes.io/projected/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-kube-api-access-x6znx\") pod \"ceilometer-0\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") " pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.711339 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8ftg\" (UniqueName: \"kubernetes.io/projected/6a8bca99-1b39-4c66-9251-659c2feb0b42-kube-api-access-z8ftg\") on 
node \"crc\" DevicePath \"\"" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.732993 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.733917 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6a8bca99-1b39-4c66-9251-659c2feb0b42" (UID: "6a8bca99-1b39-4c66-9251-659c2feb0b42"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.750084 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6a8bca99-1b39-4c66-9251-659c2feb0b42" (UID: "6a8bca99-1b39-4c66-9251-659c2feb0b42"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.771801 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a8bca99-1b39-4c66-9251-659c2feb0b42" (UID: "6a8bca99-1b39-4c66-9251-659c2feb0b42"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.784737 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-config" (OuterVolumeSpecName: "config") pod "6a8bca99-1b39-4c66-9251-659c2feb0b42" (UID: "6a8bca99-1b39-4c66-9251-659c2feb0b42"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.805564 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6a8bca99-1b39-4c66-9251-659c2feb0b42" (UID: "6a8bca99-1b39-4c66-9251-659c2feb0b42"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.813396 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.813445 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.813461 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.813473 4953 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:30 crc kubenswrapper[4953]: I1211 10:34:30.813487 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a8bca99-1b39-4c66-9251-659c2feb0b42-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.052996 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9","Type":"ContainerStarted","Data":"27d7bd15395c56bdb6dd073de326859835346e90d66b84971afddc6e6d0a67e7"} Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.057433 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" event={"ID":"6a8bca99-1b39-4c66-9251-659c2feb0b42","Type":"ContainerDied","Data":"821444ba8d29747dbec592c9b63b192232662490c0f68c922bb52433c531e6d5"} Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.057476 4953 scope.go:117] "RemoveContainer" containerID="a4d646b49eaf4090d0b2fe544cedf68e734397b4cde6a8a6515f3ae3086897a4" Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.057718 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f5756c4f-t7jg6" Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.065492 4953 generic.go:334] "Generic (PLEG): container finished" podID="586be632-9d3d-46be-9de4-5059e771edcf" containerID="6643f662cdbb7192253da376618dc19598e53df9833c526637d583d007913afd" exitCode=0 Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.065546 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" event={"ID":"586be632-9d3d-46be-9de4-5059e771edcf","Type":"ContainerDied","Data":"6643f662cdbb7192253da376618dc19598e53df9833c526637d583d007913afd"} Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.090396 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-898848ccb-4kkwg" event={"ID":"8df58633-8c06-4dd0-a538-b696f9736f6d","Type":"ContainerStarted","Data":"320ee940deece2aa26a07f8c43904e8a8100f8c59b64aa05433ca8faa0893d3f"} Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.090440 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-898848ccb-4kkwg" event={"ID":"8df58633-8c06-4dd0-a538-b696f9736f6d","Type":"ContainerStarted","Data":"90fc77b9c3712d164c13bcc6f46d987fe0e34162bdcf315b272f12d817bea344"} Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.090449 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-898848ccb-4kkwg" event={"ID":"8df58633-8c06-4dd0-a538-b696f9736f6d","Type":"ContainerStarted","Data":"efbf57ee6cc4e52395d74297cfc67e1fd8c8270f4e8e0befe2f0f91f5aba8ddb"} Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.091962 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.091999 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.124668 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-898848ccb-4kkwg" podStartSLOduration=3.124630172 podStartE2EDuration="3.124630172s" podCreationTimestamp="2025-12-11 10:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:31.114293815 +0000 UTC m=+1389.138152858" watchObservedRunningTime="2025-12-11 10:34:31.124630172 +0000 UTC m=+1389.148489205" Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.176100 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.215215 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f5756c4f-t7jg6"] Dec 11 10:34:31 crc kubenswrapper[4953]: I1211 10:34:31.239356 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9f5756c4f-t7jg6"] Dec 11 10:34:32 crc kubenswrapper[4953]: I1211 10:34:32.112418 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96eb50ce-78f1-4258-bd16-4a79030a7209","Type":"ContainerStarted","Data":"ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6"} Dec 11 10:34:32 crc kubenswrapper[4953]: I1211 10:34:32.510263 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a8bca99-1b39-4c66-9251-659c2feb0b42" path="/var/lib/kubelet/pods/6a8bca99-1b39-4c66-9251-659c2feb0b42/volumes" Dec 11 10:34:32 crc 
kubenswrapper[4953]: I1211 10:34:32.638830 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:34:32 crc kubenswrapper[4953]: W1211 10:34:32.647385 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaacb6b2c_e7b0_4d0f_84b2_b064cc344258.slice/crio-84c06759a531e6739c2b572a0ae4994709974ac0d06c8a8071c09f8624359379 WatchSource:0}: Error finding container 84c06759a531e6739c2b572a0ae4994709974ac0d06c8a8071c09f8624359379: Status 404 returned error can't find the container with id 84c06759a531e6739c2b572a0ae4994709974ac0d06c8a8071c09f8624359379 Dec 11 10:34:33 crc kubenswrapper[4953]: I1211 10:34:33.131137 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aacb6b2c-e7b0-4d0f-84b2-b064cc344258","Type":"ContainerStarted","Data":"84c06759a531e6739c2b572a0ae4994709974ac0d06c8a8071c09f8624359379"} Dec 11 10:34:33 crc kubenswrapper[4953]: I1211 10:34:33.134734 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" event={"ID":"caec0159-12b1-46f9-952c-10f229948036","Type":"ContainerStarted","Data":"6cb07fdb5e67db9e16c8125784b8b3014f71452b7d478333ae5ae1ede91ec6ff"} Dec 11 10:34:33 crc kubenswrapper[4953]: I1211 10:34:33.134785 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" event={"ID":"caec0159-12b1-46f9-952c-10f229948036","Type":"ContainerStarted","Data":"7d7961ffaf0fa5639d3e96bbbb7ff1815fd8017ed09d51fdb3f868fc15297c07"} Dec 11 10:34:33 crc kubenswrapper[4953]: I1211 10:34:33.139015 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" event={"ID":"544e1955-4316-4587-90a8-94bac4f81ae5","Type":"ContainerStarted","Data":"120e662c3201d0f81e55488f64c74d01e67c74d5af04b0ca903d4ba77213d505"} Dec 11 10:34:33 crc kubenswrapper[4953]: I1211 10:34:33.139040 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" event={"ID":"544e1955-4316-4587-90a8-94bac4f81ae5","Type":"ContainerStarted","Data":"9bcdd67ff3f27b165dca3277b206f20442bbecd9d522b5435dc8a058e29f8375"} Dec 11 10:34:33 crc kubenswrapper[4953]: I1211 10:34:33.143926 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" event={"ID":"586be632-9d3d-46be-9de4-5059e771edcf","Type":"ContainerStarted","Data":"8380e2880db7a1c1a8b15a48693d500857a01fbf3050e1a2877a5a73f7d990c1"} Dec 11 10:34:33 crc kubenswrapper[4953]: I1211 10:34:33.144613 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:33 crc kubenswrapper[4953]: I1211 10:34:33.164694 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" podStartSLOduration=2.680435448 podStartE2EDuration="5.164677364s" podCreationTimestamp="2025-12-11 10:34:28 +0000 UTC" firstStartedPulling="2025-12-11 10:34:29.758835177 +0000 UTC m=+1387.782694210" lastFinishedPulling="2025-12-11 10:34:32.243077093 +0000 UTC m=+1390.266936126" observedRunningTime="2025-12-11 10:34:33.160911672 +0000 UTC m=+1391.184770715" watchObservedRunningTime="2025-12-11 10:34:33.164677364 +0000 UTC m=+1391.188536397" Dec 11 10:34:33 crc kubenswrapper[4953]: I1211 10:34:33.196492 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" podStartSLOduration=5.196475602 podStartE2EDuration="5.196475602s" podCreationTimestamp="2025-12-11 10:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:33.180207787 +0000 UTC m=+1391.204066830" watchObservedRunningTime="2025-12-11 10:34:33.196475602 +0000 UTC m=+1391.220334635" Dec 11 10:34:33 crc kubenswrapper[4953]: I1211 10:34:33.227170 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" podStartSLOduration=2.614106709 podStartE2EDuration="5.227151767s" podCreationTimestamp="2025-12-11 10:34:28 +0000 UTC" firstStartedPulling="2025-12-11 10:34:29.587480197 +0000 UTC m=+1387.611339230" lastFinishedPulling="2025-12-11 10:34:32.200525255 +0000 UTC m=+1390.224384288" observedRunningTime="2025-12-11 10:34:33.207685106 +0000 UTC m=+1391.231544139" watchObservedRunningTime="2025-12-11 10:34:33.227151767 +0000 UTC m=+1391.251010800" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.180840 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9","Type":"ContainerStarted","Data":"86a6ea85715bef0e1049dec4f7f33925924ceffad92a15d0cb4b7672b26d1dbf"} Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.181737 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.181191 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" containerName="cinder-api-log" containerID="cri-o://27d7bd15395c56bdb6dd073de326859835346e90d66b84971afddc6e6d0a67e7" gracePeriod=30 Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.181849 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" containerName="cinder-api" containerID="cri-o://86a6ea85715bef0e1049dec4f7f33925924ceffad92a15d0cb4b7672b26d1dbf" gracePeriod=30 Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.192174 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aacb6b2c-e7b0-4d0f-84b2-b064cc344258","Type":"ContainerStarted","Data":"336200d0a637af06e39f89da75fccab7c166e3053ea9acaa781f133b2f9d1013"} Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.206425 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96eb50ce-78f1-4258-bd16-4a79030a7209","Type":"ContainerStarted","Data":"c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a"} Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.218253 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.218231929 podStartE2EDuration="6.218231929s" podCreationTimestamp="2025-12-11 10:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:34.208296802 +0000 UTC m=+1392.232155835" watchObservedRunningTime="2025-12-11 10:34:34.218231929 +0000 UTC m=+1392.242090962" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.242244 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" 
podStartSLOduration=5.232335661 podStartE2EDuration="6.242227124s" podCreationTimestamp="2025-12-11 10:34:28 +0000 UTC" firstStartedPulling="2025-12-11 10:34:29.428356423 +0000 UTC m=+1387.452215466" lastFinishedPulling="2025-12-11 10:34:30.438247896 +0000 UTC m=+1388.462106929" observedRunningTime="2025-12-11 10:34:34.230559107 +0000 UTC m=+1392.254418140" watchObservedRunningTime="2025-12-11 10:34:34.242227124 +0000 UTC m=+1392.266086157" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.494132 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75bd4868-pp5tq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.689651 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c85df7b9d-rdbfq"] Dec 11 10:34:34 crc kubenswrapper[4953]: E1211 10:34:34.690256 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a8bca99-1b39-4c66-9251-659c2feb0b42" containerName="init" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.690286 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8bca99-1b39-4c66-9251-659c2feb0b42" containerName="init" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.690523 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a8bca99-1b39-4c66-9251-659c2feb0b42" containerName="init" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.691835 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.693810 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.694221 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.731036 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c85df7b9d-rdbfq"] Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.751867 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-config-data-custom\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.751934 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqtmj\" (UniqueName: \"kubernetes.io/projected/767370a9-f8dd-4370-a2cc-f5baeff52c54-kube-api-access-dqtmj\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.752006 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-internal-tls-certs\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.752041 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-combined-ca-bundle\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.752071 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-config-data\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.752094 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/767370a9-f8dd-4370-a2cc-f5baeff52c54-logs\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.752120 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-public-tls-certs\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.856661 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqtmj\" (UniqueName: \"kubernetes.io/projected/767370a9-f8dd-4370-a2cc-f5baeff52c54-kube-api-access-dqtmj\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.856814 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-internal-tls-certs\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.856865 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-combined-ca-bundle\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.856900 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-config-data\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.856925 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/767370a9-f8dd-4370-a2cc-f5baeff52c54-logs\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.856950 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-public-tls-certs\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.856974 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-config-data-custom\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.864889 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-config-data\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.869080 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/767370a9-f8dd-4370-a2cc-f5baeff52c54-logs\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.869653 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-internal-tls-certs\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.874212 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-combined-ca-bundle\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.874298 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-public-tls-certs\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.874509 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-config-data-custom\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:34 crc kubenswrapper[4953]: I1211 10:34:34.897441 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqtmj\" (UniqueName: \"kubernetes.io/projected/767370a9-f8dd-4370-a2cc-f5baeff52c54-kube-api-access-dqtmj\") pod \"barbican-api-7c85df7b9d-rdbfq\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:35 crc kubenswrapper[4953]: I1211 10:34:35.029431 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:35 crc kubenswrapper[4953]: I1211 10:34:35.217728 4953 generic.go:334] "Generic (PLEG): container finished" podID="ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" containerID="86a6ea85715bef0e1049dec4f7f33925924ceffad92a15d0cb4b7672b26d1dbf" exitCode=0 Dec 11 10:34:35 crc kubenswrapper[4953]: I1211 10:34:35.217785 4953 generic.go:334] "Generic (PLEG): container finished" podID="ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" containerID="27d7bd15395c56bdb6dd073de326859835346e90d66b84971afddc6e6d0a67e7" exitCode=143 Dec 11 10:34:35 crc kubenswrapper[4953]: I1211 10:34:35.217791 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9","Type":"ContainerDied","Data":"86a6ea85715bef0e1049dec4f7f33925924ceffad92a15d0cb4b7672b26d1dbf"} Dec 11 10:34:35 crc kubenswrapper[4953]: I1211 10:34:35.217846 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9","Type":"ContainerDied","Data":"27d7bd15395c56bdb6dd073de326859835346e90d66b84971afddc6e6d0a67e7"} Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.201948 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c85df7b9d-rdbfq"] Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.244065 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aacb6b2c-e7b0-4d0f-84b2-b064cc344258","Type":"ContainerStarted","Data":"a52096f5fd3e155a4f341d1101e9acc2b73a598a10e142a3ae874728c8a1f732"} Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.245516 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c85df7b9d-rdbfq" event={"ID":"767370a9-f8dd-4370-a2cc-f5baeff52c54","Type":"ContainerStarted","Data":"938c6088d34c783f9d003005b0a091ac9452749112428e1e65d1b0596807ca34"} Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.370497 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.421563 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-config-data\") pod \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.421748 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-logs\") pod \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.421855 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-scripts\") pod \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.422094 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-etc-machine-id\") pod \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.422208 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-config-data-custom\") pod \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.422344 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv7dq\" (UniqueName: \"kubernetes.io/projected/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-kube-api-access-xv7dq\") pod \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.422448 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-combined-ca-bundle\") pod \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\" (UID: \"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9\") " Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.422667 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" (UID: "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.423111 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-logs" (OuterVolumeSpecName: "logs") pod "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" (UID: "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.423594 4953 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.424386 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.430866 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-scripts" (OuterVolumeSpecName: "scripts") pod "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" (UID: "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.435826 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" (UID: "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.442838 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-kube-api-access-xv7dq" (OuterVolumeSpecName: "kube-api-access-xv7dq") pod "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" (UID: "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9"). InnerVolumeSpecName "kube-api-access-xv7dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.483370 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" (UID: "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.517185 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-config-data" (OuterVolumeSpecName: "config-data") pod "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" (UID: "ef1a3525-9696-49d4-9a66-0fae5d1ed2e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.526105 4953 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.526143 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv7dq\" (UniqueName: \"kubernetes.io/projected/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-kube-api-access-xv7dq\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.526156 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.526165 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.526174 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.608394 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7567d9469d-rx5dx" Dec 11 10:34:36 crc kubenswrapper[4953]: I1211 10:34:36.621470 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7567d9469d-rx5dx" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.257593 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aacb6b2c-e7b0-4d0f-84b2-b064cc344258","Type":"ContainerStarted","Data":"ace0e8874130c5fd96b3799500075fbdca0917c37b866eb6edf4f6b69f412004"} Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.260749 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c85df7b9d-rdbfq" event={"ID":"767370a9-f8dd-4370-a2cc-f5baeff52c54","Type":"ContainerStarted","Data":"f1f3935cba9d49f468aa48835e818e819d4e1455992846d4cd92a2e960523799"} Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.260848 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c85df7b9d-rdbfq" event={"ID":"767370a9-f8dd-4370-a2cc-f5baeff52c54","Type":"ContainerStarted","Data":"ae3f22ec9f89b003c85fac5cd8cf0695244934ca68cf1fc2a0f17935650f23bf"} Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.261879 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.261922 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.263617 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ef1a3525-9696-49d4-9a66-0fae5d1ed2e9","Type":"ContainerDied","Data":"9424b4c28ae7636cbffccfbac9961a6e67a70c2666e89b46d9dd850af79f2e63"} Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.263664 4953 scope.go:117] "RemoveContainer" containerID="86a6ea85715bef0e1049dec4f7f33925924ceffad92a15d0cb4b7672b26d1dbf" Dec 11 10:34:37 crc kubenswrapper[4953]: 
I1211 10:34:37.263674 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.287587 4953 scope.go:117] "RemoveContainer" containerID="27d7bd15395c56bdb6dd073de326859835346e90d66b84971afddc6e6d0a67e7" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.289255 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c85df7b9d-rdbfq" podStartSLOduration=3.2892286410000002 podStartE2EDuration="3.289228641s" podCreationTimestamp="2025-12-11 10:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:37.28049505 +0000 UTC m=+1395.304354093" watchObservedRunningTime="2025-12-11 10:34:37.289228641 +0000 UTC m=+1395.313087694" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.313465 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.323170 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.355075 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:34:37 crc kubenswrapper[4953]: E1211 10:34:37.355447 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" containerName="cinder-api-log" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.355458 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" containerName="cinder-api-log" Dec 11 10:34:37 crc kubenswrapper[4953]: E1211 10:34:37.355487 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" containerName="cinder-api" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.355493 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" containerName="cinder-api" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.355662 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" containerName="cinder-api-log" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.355672 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" containerName="cinder-api" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.356596 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.372212 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.372670 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.372833 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.401847 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.450636 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-config-data-custom\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.450731 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.450766 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-config-data\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.450794 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.450829 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b1b7520-f52c-4a2a-98e5-16ac7460bade-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.450852 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-scripts\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.450870 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.451004 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4b1b7520-f52c-4a2a-98e5-16ac7460bade-logs\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.451067 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw7zc\" (UniqueName: \"kubernetes.io/projected/4b1b7520-f52c-4a2a-98e5-16ac7460bade-kube-api-access-lw7zc\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.552371 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b1b7520-f52c-4a2a-98e5-16ac7460bade-logs\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.552461 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw7zc\" (UniqueName: \"kubernetes.io/projected/4b1b7520-f52c-4a2a-98e5-16ac7460bade-kube-api-access-lw7zc\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.552502 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-config-data-custom\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.552535 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.552557 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-config-data\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.552592 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.552616 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b1b7520-f52c-4a2a-98e5-16ac7460bade-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.552632 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-scripts\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.552651 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.554432 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b1b7520-f52c-4a2a-98e5-16ac7460bade-logs\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.556031 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b1b7520-f52c-4a2a-98e5-16ac7460bade-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.559645 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-config-data-custom\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.561822 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.561788 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.562796 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-config-data\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.563913 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-scripts\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.575211 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.587396 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw7zc\" (UniqueName: \"kubernetes.io/projected/4b1b7520-f52c-4a2a-98e5-16ac7460bade-kube-api-access-lw7zc\") pod \"cinder-api-0\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.679030 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.882283 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.952345 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75bd4868-pp5tq"] Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.952566 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75bd4868-pp5tq" podUID="e06bccd4-aefb-4055-b4eb-ef745234cbcb" containerName="neutron-api" containerID="cri-o://ce6bd3dbc16b31a00af008f4d0a7b764c9419ef9ba4de250ba6d865f380c53fd" gracePeriod=30 Dec 11 10:34:37 crc kubenswrapper[4953]: I1211 10:34:37.953201 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75bd4868-pp5tq" podUID="e06bccd4-aefb-4055-b4eb-ef745234cbcb" containerName="neutron-httpd" containerID="cri-o://936b2499143aa85b9d93c66ed77301a1ddbc5fb5ce305f425ffb081da5d18a69" gracePeriod=30 Dec 11 10:34:38 crc kubenswrapper[4953]: I1211 10:34:38.188131 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:34:38 crc kubenswrapper[4953]: I1211 10:34:38.243908 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-55b68558f8-r49n8" Dec 11 10:34:38 crc kubenswrapper[4953]: I1211 10:34:38.294809 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4b1b7520-f52c-4a2a-98e5-16ac7460bade","Type":"ContainerStarted","Data":"ccc51b63a022f3a56643f7cf5c7d4f3cbcc20bcfcd5e67a451eca507a78e60c3"} Dec 11 10:34:38 crc kubenswrapper[4953]: I1211 10:34:38.309740 4953 generic.go:334] "Generic (PLEG): container finished" podID="e06bccd4-aefb-4055-b4eb-ef745234cbcb" containerID="936b2499143aa85b9d93c66ed77301a1ddbc5fb5ce305f425ffb081da5d18a69" exitCode=0 Dec 11 10:34:38 crc kubenswrapper[4953]: I1211 10:34:38.310733 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd4868-pp5tq" event={"ID":"e06bccd4-aefb-4055-b4eb-ef745234cbcb","Type":"ContainerDied","Data":"936b2499143aa85b9d93c66ed77301a1ddbc5fb5ce305f425ffb081da5d18a69"} Dec 11 10:34:38 crc kubenswrapper[4953]: I1211 10:34:38.450974 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 10:34:38 crc kubenswrapper[4953]: I1211 10:34:38.511217 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef1a3525-9696-49d4-9a66-0fae5d1ed2e9" path="/var/lib/kubelet/pods/ef1a3525-9696-49d4-9a66-0fae5d1ed2e9/volumes" Dec 11 10:34:38 crc kubenswrapper[4953]: I1211 10:34:38.868201 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 11 10:34:38 crc kubenswrapper[4953]: I1211 10:34:38.964830 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.241519 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-h4qws"] Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.242025 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws" podUID="41289b74-d869-42e1-875c-4ba58c7cd4c2" containerName="dnsmasq-dns" 
containerID="cri-o://9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4" gracePeriod=10 Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.250983 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.252364 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.260342 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.260778 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.260431 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-x7j4h" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.292414 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.349740 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl2rw\" (UniqueName: \"kubernetes.io/projected/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-kube-api-access-bl2rw\") pod \"openstackclient\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.349840 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-openstack-config\") pod \"openstackclient\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.349884 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-combined-ca-bundle\") pod \"openstackclient\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.350004 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-openstack-config-secret\") pod \"openstackclient\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.371886 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4b1b7520-f52c-4a2a-98e5-16ac7460bade","Type":"ContainerStarted","Data":"6b9b936ddf7f45582285d8d2e0da2428665ce0beadde06342951de718bfb19dc"} Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.388231 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aacb6b2c-e7b0-4d0f-84b2-b064cc344258","Type":"ContainerStarted","Data":"9d42fb894de3e0dfd7cfbab270564caee6df5efafc65ef9920c27e6a9de364c8"} Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.388274 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.430201 4953 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ceilometer-0" podStartSLOduration=3.724852257 podStartE2EDuration="9.43017965s" podCreationTimestamp="2025-12-11 10:34:30 +0000 UTC" firstStartedPulling="2025-12-11 10:34:32.652754919 +0000 UTC m=+1390.676613952" lastFinishedPulling="2025-12-11 10:34:38.358082312 +0000 UTC m=+1396.381941345" observedRunningTime="2025-12-11 10:34:39.417136621 +0000 UTC m=+1397.440995674" watchObservedRunningTime="2025-12-11 10:34:39.43017965 +0000 UTC m=+1397.454038673" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.453996 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl2rw\" (UniqueName: \"kubernetes.io/projected/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-kube-api-access-bl2rw\") pod \"openstackclient\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.454366 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-openstack-config\") pod \"openstackclient\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.461565 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-combined-ca-bundle\") pod \"openstackclient\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.462003 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-openstack-config-secret\") pod \"openstackclient\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.458495 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-openstack-config\") pod \"openstackclient\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.493478 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-combined-ca-bundle\") pod \"openstackclient\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.503093 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl2rw\" (UniqueName: \"kubernetes.io/projected/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-kube-api-access-bl2rw\") pod \"openstackclient\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.513050 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-openstack-config-secret\") pod \"openstackclient\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " pod="openstack/openstackclient" Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.585066 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-scheduler-0"] Dec 11 10:34:39 crc kubenswrapper[4953]: I1211 10:34:39.594167 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.201891 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.289136 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtwhv\" (UniqueName: \"kubernetes.io/projected/41289b74-d869-42e1-875c-4ba58c7cd4c2-kube-api-access-mtwhv\") pod \"41289b74-d869-42e1-875c-4ba58c7cd4c2\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.289195 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-ovsdbserver-sb\") pod \"41289b74-d869-42e1-875c-4ba58c7cd4c2\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.289272 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-ovsdbserver-nb\") pod \"41289b74-d869-42e1-875c-4ba58c7cd4c2\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.289358 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-dns-svc\") pod \"41289b74-d869-42e1-875c-4ba58c7cd4c2\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.289469 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-dns-swift-storage-0\") pod \"41289b74-d869-42e1-875c-4ba58c7cd4c2\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.289537 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-config\") pod \"41289b74-d869-42e1-875c-4ba58c7cd4c2\" (UID: \"41289b74-d869-42e1-875c-4ba58c7cd4c2\") " Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.337939 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41289b74-d869-42e1-875c-4ba58c7cd4c2-kube-api-access-mtwhv" (OuterVolumeSpecName: "kube-api-access-mtwhv") pod "41289b74-d869-42e1-875c-4ba58c7cd4c2" (UID: "41289b74-d869-42e1-875c-4ba58c7cd4c2"). InnerVolumeSpecName "kube-api-access-mtwhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.392371 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtwhv\" (UniqueName: \"kubernetes.io/projected/41289b74-d869-42e1-875c-4ba58c7cd4c2-kube-api-access-mtwhv\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.394539 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.400338 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "41289b74-d869-42e1-875c-4ba58c7cd4c2" (UID: "41289b74-d869-42e1-875c-4ba58c7cd4c2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.411834 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-config" (OuterVolumeSpecName: "config") pod "41289b74-d869-42e1-875c-4ba58c7cd4c2" (UID: "41289b74-d869-42e1-875c-4ba58c7cd4c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.415933 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "41289b74-d869-42e1-875c-4ba58c7cd4c2" (UID: "41289b74-d869-42e1-875c-4ba58c7cd4c2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:40 crc kubenswrapper[4953]: W1211 10:34:40.436085 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56f7d9a7_e24f_4b47_b829_7adcad2b0a60.slice/crio-7ad95884c9e6cd458d8d2d1382a0d296c8df1881c8a52ff46091a322ca610ead WatchSource:0}: Error finding container 7ad95884c9e6cd458d8d2d1382a0d296c8df1881c8a52ff46091a322ca610ead: Status 404 returned error can't find the container with id 7ad95884c9e6cd458d8d2d1382a0d296c8df1881c8a52ff46091a322ca610ead Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.436758 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "41289b74-d869-42e1-875c-4ba58c7cd4c2" (UID: "41289b74-d869-42e1-875c-4ba58c7cd4c2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.449711 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.449016 4953 generic.go:334] "Generic (PLEG): container finished" podID="41289b74-d869-42e1-875c-4ba58c7cd4c2" containerID="9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4" exitCode=0 Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.450260 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws" event={"ID":"41289b74-d869-42e1-875c-4ba58c7cd4c2","Type":"ContainerDied","Data":"9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4"} Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.450301 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-h4qws" event={"ID":"41289b74-d869-42e1-875c-4ba58c7cd4c2","Type":"ContainerDied","Data":"468b237f2754b5b1deadc73ca419702adc05fea149b09441ce56e5c05edf6be0"} Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.450320 4953 scope.go:117] "RemoveContainer" containerID="9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.450340 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="96eb50ce-78f1-4258-bd16-4a79030a7209" containerName="cinder-scheduler" containerID="cri-o://ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6" gracePeriod=30 Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.450457 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="96eb50ce-78f1-4258-bd16-4a79030a7209" containerName="probe" containerID="cri-o://c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a" gracePeriod=30 Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.456246 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41289b74-d869-42e1-875c-4ba58c7cd4c2" (UID: "41289b74-d869-42e1-875c-4ba58c7cd4c2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.495423 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.495456 4953 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.495469 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.495480 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.495489 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41289b74-d869-42e1-875c-4ba58c7cd4c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.512777 4953 scope.go:117] "RemoveContainer" containerID="ebec616faad41e4c980a50c5c12b2c9ee0e89e06b3fde358ccfdbe02d5b9dabf" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.555237 4953 scope.go:117] "RemoveContainer" containerID="9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4" Dec 11 10:34:40 crc kubenswrapper[4953]: E1211 10:34:40.556878 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4\": container with ID starting with 9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4 not found: ID does not exist" containerID="9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.556946 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4"} err="failed to get container status \"9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4\": rpc error: code = NotFound desc = could not find container \"9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4\": container with ID starting with 9a255d59c1db42c2842d3af9eb3eb3254ee5b3e3e12d4f55efba19b41ef003d4 not found: ID does not exist" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.556983 4953 scope.go:117] "RemoveContainer" containerID="ebec616faad41e4c980a50c5c12b2c9ee0e89e06b3fde358ccfdbe02d5b9dabf" Dec 11 10:34:40 crc kubenswrapper[4953]: E1211 10:34:40.557748 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebec616faad41e4c980a50c5c12b2c9ee0e89e06b3fde358ccfdbe02d5b9dabf\": container with ID starting with ebec616faad41e4c980a50c5c12b2c9ee0e89e06b3fde358ccfdbe02d5b9dabf not found: ID does not exist" containerID="ebec616faad41e4c980a50c5c12b2c9ee0e89e06b3fde358ccfdbe02d5b9dabf" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.557781 4953 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"ebec616faad41e4c980a50c5c12b2c9ee0e89e06b3fde358ccfdbe02d5b9dabf"} err="failed to get container status \"ebec616faad41e4c980a50c5c12b2c9ee0e89e06b3fde358ccfdbe02d5b9dabf\": rpc error: code = NotFound desc = could not find container \"ebec616faad41e4c980a50c5c12b2c9ee0e89e06b3fde358ccfdbe02d5b9dabf\": container with ID starting with ebec616faad41e4c980a50c5c12b2c9ee0e89e06b3fde358ccfdbe02d5b9dabf not found: ID does not exist" Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.773896 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-h4qws"] Dec 11 10:34:40 crc kubenswrapper[4953]: I1211 10:34:40.785106 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-h4qws"] Dec 11 10:34:41 crc kubenswrapper[4953]: I1211 10:34:41.125642 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:41 crc kubenswrapper[4953]: I1211 10:34:41.493932 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"56f7d9a7-e24f-4b47-b829-7adcad2b0a60","Type":"ContainerStarted","Data":"7ad95884c9e6cd458d8d2d1382a0d296c8df1881c8a52ff46091a322ca610ead"} Dec 11 10:34:41 crc kubenswrapper[4953]: I1211 10:34:41.504366 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4b1b7520-f52c-4a2a-98e5-16ac7460bade","Type":"ContainerStarted","Data":"7be2bbaefa3e689cb3eb71687b4eaaaa7ace9bf5c6191bc5de9d655c138598a0"} Dec 11 10:34:41 crc kubenswrapper[4953]: I1211 10:34:41.506273 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 11 10:34:41 crc kubenswrapper[4953]: I1211 10:34:41.556093 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.55606568 podStartE2EDuration="4.55606568s" podCreationTimestamp="2025-12-11 10:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:41.53528039 +0000 UTC m=+1399.559139433" watchObservedRunningTime="2025-12-11 10:34:41.55606568 +0000 UTC m=+1399.579924723" Dec 11 10:34:41 crc kubenswrapper[4953]: I1211 10:34:41.971142 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:42 crc kubenswrapper[4953]: I1211 10:34:42.501955 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41289b74-d869-42e1-875c-4ba58c7cd4c2" path="/var/lib/kubelet/pods/41289b74-d869-42e1-875c-4ba58c7cd4c2/volumes" Dec 11 10:34:42 crc kubenswrapper[4953]: I1211 10:34:42.546264 4953 generic.go:334] "Generic (PLEG): container finished" podID="96eb50ce-78f1-4258-bd16-4a79030a7209" containerID="c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a" exitCode=0 Dec 11 10:34:42 crc kubenswrapper[4953]: I1211 10:34:42.547599 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96eb50ce-78f1-4258-bd16-4a79030a7209","Type":"ContainerDied","Data":"c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a"} Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.203309 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.343479 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-combined-ca-bundle\") pod \"96eb50ce-78f1-4258-bd16-4a79030a7209\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.343752 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8zs2\" (UniqueName: \"kubernetes.io/projected/96eb50ce-78f1-4258-bd16-4a79030a7209-kube-api-access-j8zs2\") pod \"96eb50ce-78f1-4258-bd16-4a79030a7209\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.343837 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-scripts\") pod \"96eb50ce-78f1-4258-bd16-4a79030a7209\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.343980 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-config-data\") pod \"96eb50ce-78f1-4258-bd16-4a79030a7209\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.344146 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-config-data-custom\") pod \"96eb50ce-78f1-4258-bd16-4a79030a7209\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.344265 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96eb50ce-78f1-4258-bd16-4a79030a7209-etc-machine-id\") pod \"96eb50ce-78f1-4258-bd16-4a79030a7209\" (UID: \"96eb50ce-78f1-4258-bd16-4a79030a7209\") " Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.344872 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96eb50ce-78f1-4258-bd16-4a79030a7209-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "96eb50ce-78f1-4258-bd16-4a79030a7209" (UID: "96eb50ce-78f1-4258-bd16-4a79030a7209"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.354742 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96eb50ce-78f1-4258-bd16-4a79030a7209-kube-api-access-j8zs2" (OuterVolumeSpecName: "kube-api-access-j8zs2") pod "96eb50ce-78f1-4258-bd16-4a79030a7209" (UID: "96eb50ce-78f1-4258-bd16-4a79030a7209"). InnerVolumeSpecName "kube-api-access-j8zs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.357706 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-scripts" (OuterVolumeSpecName: "scripts") pod "96eb50ce-78f1-4258-bd16-4a79030a7209" (UID: "96eb50ce-78f1-4258-bd16-4a79030a7209"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.365785 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "96eb50ce-78f1-4258-bd16-4a79030a7209" (UID: "96eb50ce-78f1-4258-bd16-4a79030a7209"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.450799 4953 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.451042 4953 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96eb50ce-78f1-4258-bd16-4a79030a7209-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.451104 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8zs2\" (UniqueName: \"kubernetes.io/projected/96eb50ce-78f1-4258-bd16-4a79030a7209-kube-api-access-j8zs2\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.451088 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96eb50ce-78f1-4258-bd16-4a79030a7209" (UID: "96eb50ce-78f1-4258-bd16-4a79030a7209"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.452734 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.501068 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-config-data" (OuterVolumeSpecName: "config-data") pod "96eb50ce-78f1-4258-bd16-4a79030a7209" (UID: "96eb50ce-78f1-4258-bd16-4a79030a7209"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.555548 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.555612 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96eb50ce-78f1-4258-bd16-4a79030a7209-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.559758 4953 generic.go:334] "Generic (PLEG): container finished" podID="96eb50ce-78f1-4258-bd16-4a79030a7209" containerID="ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6" exitCode=0 Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.560137 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96eb50ce-78f1-4258-bd16-4a79030a7209","Type":"ContainerDied","Data":"ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6"} Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.560158 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.560181 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96eb50ce-78f1-4258-bd16-4a79030a7209","Type":"ContainerDied","Data":"41e74ddc8f74bd623fbbe1c5351ac5046ab3eb7cc50b0aab4ce767b22e3a600b"} Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.560206 4953 scope.go:117] "RemoveContainer" containerID="c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.604245 4953 scope.go:117] "RemoveContainer" containerID="ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.687256 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.705203 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.711795 4953 scope.go:117] "RemoveContainer" containerID="c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a" Dec 11 10:34:43 crc kubenswrapper[4953]: E1211 10:34:43.714344 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a\": container with ID starting with c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a not found: ID does not exist" containerID="c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a" Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.714386 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a"} err="failed to get container status \"c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a\": rpc error: code = NotFound desc = could not find container \"c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a\": container with ID starting with c6f3b076c763e7a2a444ec082df0bab65d3333f555a08473e67de7f5f0fa838a not found: ID does not exist" Dec 11 10:34:43 crc 
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.714412 4953 scope.go:117] "RemoveContainer" containerID="ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.715228 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 11 10:34:43 crc kubenswrapper[4953]: E1211 10:34:43.715633 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41289b74-d869-42e1-875c-4ba58c7cd4c2" containerName="dnsmasq-dns"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.715787 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="41289b74-d869-42e1-875c-4ba58c7cd4c2" containerName="dnsmasq-dns"
Dec 11 10:34:43 crc kubenswrapper[4953]: E1211 10:34:43.715822 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96eb50ce-78f1-4258-bd16-4a79030a7209" containerName="probe"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.715828 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="96eb50ce-78f1-4258-bd16-4a79030a7209" containerName="probe"
Dec 11 10:34:43 crc kubenswrapper[4953]: E1211 10:34:43.715836 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41289b74-d869-42e1-875c-4ba58c7cd4c2" containerName="init"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.715843 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="41289b74-d869-42e1-875c-4ba58c7cd4c2" containerName="init"
Dec 11 10:34:43 crc kubenswrapper[4953]: E1211 10:34:43.715853 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96eb50ce-78f1-4258-bd16-4a79030a7209" containerName="cinder-scheduler"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.715861 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="96eb50ce-78f1-4258-bd16-4a79030a7209" containerName="cinder-scheduler"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.716040 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="41289b74-d869-42e1-875c-4ba58c7cd4c2" containerName="dnsmasq-dns"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.716055 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="96eb50ce-78f1-4258-bd16-4a79030a7209" containerName="probe"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.716071 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="96eb50ce-78f1-4258-bd16-4a79030a7209" containerName="cinder-scheduler"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.718202 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: E1211 10:34:43.718279 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6\": container with ID starting with ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6 not found: ID does not exist" containerID="ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.718352 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6"} err="failed to get container status \"ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6\": rpc error: code = NotFound desc = could not find container \"ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6\": container with ID starting with ba3311f62e2554c8eb3898a90cc90b8054152d5abdef3b677a548121ef5694f6 not found: ID does not exist"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.721147 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.729058 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.882909 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mghl\" (UniqueName: \"kubernetes.io/projected/d1833793-1408-450f-8a7e-e01e6048edd5-kube-api-access-7mghl\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.883006 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1833793-1408-450f-8a7e-e01e6048edd5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.884651 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.884808 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-config-data\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.884836 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-scripts\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.884864 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.986266 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.986329 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-config-data\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.986351 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-scripts\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.986379 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.986413 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mghl\" (UniqueName: \"kubernetes.io/projected/d1833793-1408-450f-8a7e-e01e6048edd5-kube-api-access-7mghl\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.986461 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1833793-1408-450f-8a7e-e01e6048edd5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.986650 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1833793-1408-450f-8a7e-e01e6048edd5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.992069 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-config-data\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.992361 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-scripts\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.993601 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:43 crc kubenswrapper[4953]: I1211 10:34:43.998843 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:44 crc kubenswrapper[4953]: I1211 10:34:44.016263 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mghl\" (UniqueName: \"kubernetes.io/projected/d1833793-1408-450f-8a7e-e01e6048edd5-kube-api-access-7mghl\") pod \"cinder-scheduler-0\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " pod="openstack/cinder-scheduler-0"
Dec 11 10:34:44 crc kubenswrapper[4953]: I1211 10:34:44.040823 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 11 10:34:44 crc kubenswrapper[4953]: I1211 10:34:44.486180 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96eb50ce-78f1-4258-bd16-4a79030a7209" path="/var/lib/kubelet/pods/96eb50ce-78f1-4258-bd16-4a79030a7209/volumes"
Dec 11 10:34:44 crc kubenswrapper[4953]: I1211 10:34:44.633163 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 11 10:34:45 crc kubenswrapper[4953]: I1211 10:34:45.692839 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1833793-1408-450f-8a7e-e01e6048edd5","Type":"ContainerStarted","Data":"6633d2d60118f289461651ca377abc04f8eae490967bd314f612d43a8c179596"}
Dec 11 10:34:45 crc kubenswrapper[4953]: I1211 10:34:45.694328 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1833793-1408-450f-8a7e-e01e6048edd5","Type":"ContainerStarted","Data":"54cfaf9c4ca8b8cfd09d72616fb4df347ee6589d72cb8e24c0770de7b158f9dc"}
Dec 11 10:34:47 crc kubenswrapper[4953]: I1211 10:34:47.558979 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c85df7b9d-rdbfq"
Dec 11 10:34:47 crc kubenswrapper[4953]: I1211 10:34:47.804490 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1833793-1408-450f-8a7e-e01e6048edd5","Type":"ContainerStarted","Data":"a1dd894fb738f43b760b8725bd438e6786b826d8bd5ea6ec40ebf1c67bee2cc0"}
Dec 11 10:34:47 crc kubenswrapper[4953]: I1211 10:34:47.835603 4953 generic.go:334] "Generic (PLEG): container finished" podID="e06bccd4-aefb-4055-b4eb-ef745234cbcb" containerID="ce6bd3dbc16b31a00af008f4d0a7b764c9419ef9ba4de250ba6d865f380c53fd" exitCode=0
Dec 11 10:34:47 crc kubenswrapper[4953]: I1211 10:34:47.835644 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd4868-pp5tq" event={"ID":"e06bccd4-aefb-4055-b4eb-ef745234cbcb","Type":"ContainerDied","Data":"ce6bd3dbc16b31a00af008f4d0a7b764c9419ef9ba4de250ba6d865f380c53fd"}
Dec 11 10:34:47 crc kubenswrapper[4953]: I1211 10:34:47.843321 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.843303084 podStartE2EDuration="4.843303084s" podCreationTimestamp="2025-12-11 10:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:34:47.838788649 +0000 UTC m=+1405.862647682" watchObservedRunningTime="2025-12-11 10:34:47.843303084 +0000 UTC m=+1405.867162117"
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.038883 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.183163 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-httpd-config\") pod \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") "
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.183514 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-combined-ca-bundle\") pod \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") "
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.183705 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-ovndb-tls-certs\") pod \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") "
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.183813 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ls5d\" (UniqueName: \"kubernetes.io/projected/e06bccd4-aefb-4055-b4eb-ef745234cbcb-kube-api-access-9ls5d\") pod \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") "
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.183904 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-config\") pod \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\" (UID: \"e06bccd4-aefb-4055-b4eb-ef745234cbcb\") "
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.191811 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e06bccd4-aefb-4055-b4eb-ef745234cbcb" (UID: "e06bccd4-aefb-4055-b4eb-ef745234cbcb"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.194076 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.194134 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.194194 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898"
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.195237 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a6e85260ff84ef604c5e7d3682ea7027e5daf751b9330364d08387a0213f214"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.195303 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://3a6e85260ff84ef604c5e7d3682ea7027e5daf751b9330364d08387a0213f214" gracePeriod=600
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.209965 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e06bccd4-aefb-4055-b4eb-ef745234cbcb-kube-api-access-9ls5d" (OuterVolumeSpecName: "kube-api-access-9ls5d") pod "e06bccd4-aefb-4055-b4eb-ef745234cbcb" (UID: "e06bccd4-aefb-4055-b4eb-ef745234cbcb"). InnerVolumeSpecName "kube-api-access-9ls5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.245608 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-config" (OuterVolumeSpecName: "config") pod "e06bccd4-aefb-4055-b4eb-ef745234cbcb" (UID: "e06bccd4-aefb-4055-b4eb-ef745234cbcb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.331627 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ls5d\" (UniqueName: \"kubernetes.io/projected/e06bccd4-aefb-4055-b4eb-ef745234cbcb-kube-api-access-9ls5d\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.331672 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-config\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.331685 4953 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-httpd-config\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.341716 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e06bccd4-aefb-4055-b4eb-ef745234cbcb" (UID: "e06bccd4-aefb-4055-b4eb-ef745234cbcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.373839 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e06bccd4-aefb-4055-b4eb-ef745234cbcb" (UID: "e06bccd4-aefb-4055-b4eb-ef745234cbcb"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.433698 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.433736 4953 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e06bccd4-aefb-4055-b4eb-ef745234cbcb-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.822394 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c85df7b9d-rdbfq"
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.876211 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd4868-pp5tq" event={"ID":"e06bccd4-aefb-4055-b4eb-ef745234cbcb","Type":"ContainerDied","Data":"5927711a51eb04b60afec3c91ee723574f00db82e0124df7790c58fd1d1531b5"}
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.876671 4953 scope.go:117] "RemoveContainer" containerID="936b2499143aa85b9d93c66ed77301a1ddbc5fb5ce305f425ffb081da5d18a69"
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.876890 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75bd4868-pp5tq"
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.892883 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="3a6e85260ff84ef604c5e7d3682ea7027e5daf751b9330364d08387a0213f214" exitCode=0
Dec 11 10:34:48 crc kubenswrapper[4953]: I1211 10:34:48.893629 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"3a6e85260ff84ef604c5e7d3682ea7027e5daf751b9330364d08387a0213f214"}
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.041558 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.060845 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-898848ccb-4kkwg"]
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.061387 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-898848ccb-4kkwg" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api-log" containerID="cri-o://90fc77b9c3712d164c13bcc6f46d987fe0e34162bdcf315b272f12d817bea344" gracePeriod=30
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.062037 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-898848ccb-4kkwg" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api" containerID="cri-o://320ee940deece2aa26a07f8c43904e8a8100f8c59b64aa05433ca8faa0893d3f" gracePeriod=30
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.106049 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75bd4868-pp5tq"]
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.111848 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-75bd4868-pp5tq"]
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.147461 4953 scope.go:117] "RemoveContainer" containerID="ce6bd3dbc16b31a00af008f4d0a7b764c9419ef9ba4de250ba6d865f380c53fd"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.373140 4953 scope.go:117] "RemoveContainer" containerID="d7aacf4c14bd2bc98ec833613461a09282ac2ac960a4b2c012b1862a1a65908a"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.378736 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-78f5cf7bd5-24fm8"]
Dec 11 10:34:49 crc kubenswrapper[4953]: E1211 10:34:49.379783 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e06bccd4-aefb-4055-b4eb-ef745234cbcb" containerName="neutron-httpd"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.379812 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e06bccd4-aefb-4055-b4eb-ef745234cbcb" containerName="neutron-httpd"
Dec 11 10:34:49 crc kubenswrapper[4953]: E1211 10:34:49.379854 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e06bccd4-aefb-4055-b4eb-ef745234cbcb" containerName="neutron-api"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.379860 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e06bccd4-aefb-4055-b4eb-ef745234cbcb" containerName="neutron-api"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.380193 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e06bccd4-aefb-4055-b4eb-ef745234cbcb" containerName="neutron-httpd"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.380209 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e06bccd4-aefb-4055-b4eb-ef745234cbcb" containerName="neutron-api"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.391234 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-78f5cf7bd5-24fm8"]
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.391407 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.397037 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.397267 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.398307 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.514287 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-public-tls-certs\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.514509 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8521d832-efe5-4653-8c0e-8921f916e10f-run-httpd\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.514596 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg5gl\" (UniqueName: \"kubernetes.io/projected/8521d832-efe5-4653-8c0e-8921f916e10f-kube-api-access-jg5gl\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.514703 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-config-data\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.514787 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-combined-ca-bundle\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.514810 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-internal-tls-certs\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.514860 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8521d832-efe5-4653-8c0e-8921f916e10f-etc-swift\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.514912 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8521d832-efe5-4653-8c0e-8921f916e10f-log-httpd\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.616904 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-public-tls-certs\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.616964 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8521d832-efe5-4653-8c0e-8921f916e10f-run-httpd\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.617024 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg5gl\" (UniqueName: \"kubernetes.io/projected/8521d832-efe5-4653-8c0e-8921f916e10f-kube-api-access-jg5gl\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.617167 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-config-data\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.617214 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-combined-ca-bundle\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.617237 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-internal-tls-certs\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.617268 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8521d832-efe5-4653-8c0e-8921f916e10f-etc-swift\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.617314 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8521d832-efe5-4653-8c0e-8921f916e10f-log-httpd\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.617949 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8521d832-efe5-4653-8c0e-8921f916e10f-log-httpd\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.618318 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8521d832-efe5-4653-8c0e-8921f916e10f-run-httpd\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.700144 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-public-tls-certs\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.701266 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-combined-ca-bundle\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.710333 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg5gl\" (UniqueName: \"kubernetes.io/projected/8521d832-efe5-4653-8c0e-8921f916e10f-kube-api-access-jg5gl\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.716932 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8521d832-efe5-4653-8c0e-8921f916e10f-etc-swift\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.729303 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-config-data\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.730090 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-internal-tls-certs\") pod \"swift-proxy-78f5cf7bd5-24fm8\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.911410 4953 generic.go:334] "Generic (PLEG): container finished" podID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerID="90fc77b9c3712d164c13bcc6f46d987fe0e34162bdcf315b272f12d817bea344" exitCode=143
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.911479 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-898848ccb-4kkwg" event={"ID":"8df58633-8c06-4dd0-a538-b696f9736f6d","Type":"ContainerDied","Data":"90fc77b9c3712d164c13bcc6f46d987fe0e34162bdcf315b272f12d817bea344"}
Dec 11 10:34:49 crc kubenswrapper[4953]: I1211 10:34:49.917010 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c"}
Dec 11 10:34:50 crc kubenswrapper[4953]: I1211 10:34:50.015506 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-78f5cf7bd5-24fm8"
Dec 11 10:34:50 crc kubenswrapper[4953]: I1211 10:34:50.562935 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e06bccd4-aefb-4055-b4eb-ef745234cbcb" path="/var/lib/kubelet/pods/e06bccd4-aefb-4055-b4eb-ef745234cbcb/volumes"
Dec 11 10:34:50 crc kubenswrapper[4953]: I1211 10:34:50.764834 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-78f5cf7bd5-24fm8"]
Dec 11 10:34:50 crc kubenswrapper[4953]: I1211 10:34:50.931152 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" event={"ID":"8521d832-efe5-4653-8c0e-8921f916e10f","Type":"ContainerStarted","Data":"a25f0695175849fb98b2a8082989c5a96391ec883dd5d17ff8ccc5433e8dbec9"}
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.126834 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.127488 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="ceilometer-central-agent" containerID="cri-o://336200d0a637af06e39f89da75fccab7c166e3053ea9acaa781f133b2f9d1013" gracePeriod=30
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.127566 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="proxy-httpd" containerID="cri-o://9d42fb894de3e0dfd7cfbab270564caee6df5efafc65ef9920c27e6a9de364c8" gracePeriod=30
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.127658 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="ceilometer-notification-agent" containerID="cri-o://a52096f5fd3e155a4f341d1101e9acc2b73a598a10e142a3ae874728c8a1f732" gracePeriod=30
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.127642 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="sg-core" containerID="cri-o://ace0e8874130c5fd96b3799500075fbdca0917c37b866eb6edf4f6b69f412004" gracePeriod=30
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.141875 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": EOF"
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.360736 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.947303 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" event={"ID":"8521d832-efe5-4653-8c0e-8921f916e10f","Type":"ContainerStarted","Data":"ae1625ae9b7343e79bf1b390eabfcbbde5a933a354f8b01e304d5b2edc571afd"}
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.949874 4953 generic.go:334] "Generic (PLEG): container finished" podID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerID="9d42fb894de3e0dfd7cfbab270564caee6df5efafc65ef9920c27e6a9de364c8" exitCode=0
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.949901 4953 generic.go:334] "Generic (PLEG): container finished" podID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerID="ace0e8874130c5fd96b3799500075fbdca0917c37b866eb6edf4f6b69f412004" exitCode=2
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.949910 4953 generic.go:334] "Generic (PLEG): container finished" podID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerID="336200d0a637af06e39f89da75fccab7c166e3053ea9acaa781f133b2f9d1013" exitCode=0
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.949925 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aacb6b2c-e7b0-4d0f-84b2-b064cc344258","Type":"ContainerDied","Data":"9d42fb894de3e0dfd7cfbab270564caee6df5efafc65ef9920c27e6a9de364c8"}
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.949941 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aacb6b2c-e7b0-4d0f-84b2-b064cc344258","Type":"ContainerDied","Data":"ace0e8874130c5fd96b3799500075fbdca0917c37b866eb6edf4f6b69f412004"}
Dec 11 10:34:51 crc kubenswrapper[4953]: I1211 10:34:51.949952 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aacb6b2c-e7b0-4d0f-84b2-b064cc344258","Type":"ContainerDied","Data":"336200d0a637af06e39f89da75fccab7c166e3053ea9acaa781f133b2f9d1013"}
Dec 11 10:34:52 crc kubenswrapper[4953]: I1211 10:34:52.584963 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-898848ccb-4kkwg" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:55792->10.217.0.162:9311: read: connection reset by peer"
Dec 11 10:34:52 crc kubenswrapper[4953]: I1211 10:34:52.584975 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-898848ccb-4kkwg" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:55790->10.217.0.162:9311: read: connection reset by peer"
Dec 11 10:34:52 crc kubenswrapper[4953]: I1211 10:34:52.964665 4953 generic.go:334] "Generic (PLEG): container finished" podID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerID="320ee940deece2aa26a07f8c43904e8a8100f8c59b64aa05433ca8faa0893d3f" exitCode=0
Dec 11 10:34:52 crc kubenswrapper[4953]: I1211 10:34:52.964807 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-898848ccb-4kkwg" event={"ID":"8df58633-8c06-4dd0-a538-b696f9736f6d","Type":"ContainerDied","Data":"320ee940deece2aa26a07f8c43904e8a8100f8c59b64aa05433ca8faa0893d3f"}
Dec 11 10:34:52 crc kubenswrapper[4953]: I1211 10:34:52.968961 4953 generic.go:334] "Generic (PLEG): container finished" podID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerID="a52096f5fd3e155a4f341d1101e9acc2b73a598a10e142a3ae874728c8a1f732" exitCode=0
Dec 11 10:34:52 crc kubenswrapper[4953]: I1211 10:34:52.968995 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aacb6b2c-e7b0-4d0f-84b2-b064cc344258","Type":"ContainerDied","Data":"a52096f5fd3e155a4f341d1101e9acc2b73a598a10e142a3ae874728c8a1f732"}
Dec 11 10:34:54 crc kubenswrapper[4953]: I1211 10:34:54.273298 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-898848ccb-4kkwg" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused"
Dec 11 10:34:54 crc kubenswrapper[4953]: I1211 10:34:54.273298 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-898848ccb-4kkwg" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused"
Dec 11 10:34:54 crc kubenswrapper[4953]: I1211 10:34:54.405966 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.189651 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.227036 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-config-data\") pod \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") "
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.227084 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-combined-ca-bundle\") pod \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") "
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.227113 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-scripts\") pod \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") "
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.227186 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-log-httpd\") pod \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") "
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.227208 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-run-httpd\") pod \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") "
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.227234 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6znx\" (UniqueName: \"kubernetes.io/projected/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-kube-api-access-x6znx\") pod \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") "
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.227262 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-sg-core-conf-yaml\") pod \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\" (UID: \"aacb6b2c-e7b0-4d0f-84b2-b064cc344258\") "
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.229887 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aacb6b2c-e7b0-4d0f-84b2-b064cc344258" (UID: "aacb6b2c-e7b0-4d0f-84b2-b064cc344258"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.230373 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aacb6b2c-e7b0-4d0f-84b2-b064cc344258" (UID: "aacb6b2c-e7b0-4d0f-84b2-b064cc344258"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.234227 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-scripts" (OuterVolumeSpecName: "scripts") pod "aacb6b2c-e7b0-4d0f-84b2-b064cc344258" (UID: "aacb6b2c-e7b0-4d0f-84b2-b064cc344258"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.235059 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-kube-api-access-x6znx" (OuterVolumeSpecName: "kube-api-access-x6znx") pod "aacb6b2c-e7b0-4d0f-84b2-b064cc344258" (UID: "aacb6b2c-e7b0-4d0f-84b2-b064cc344258"). InnerVolumeSpecName "kube-api-access-x6znx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.277547 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aacb6b2c-e7b0-4d0f-84b2-b064cc344258" (UID: "aacb6b2c-e7b0-4d0f-84b2-b064cc344258"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.291907 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-898848ccb-4kkwg"
Need to start a new one" pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.330046 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.330075 4953 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.330084 4953 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.330102 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6znx\" (UniqueName: \"kubernetes.io/projected/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-kube-api-access-x6znx\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.330112 4953 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.335629 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aacb6b2c-e7b0-4d0f-84b2-b064cc344258" (UID: "aacb6b2c-e7b0-4d0f-84b2-b064cc344258"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.367295 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-config-data" (OuterVolumeSpecName: "config-data") pod "aacb6b2c-e7b0-4d0f-84b2-b064cc344258" (UID: "aacb6b2c-e7b0-4d0f-84b2-b064cc344258"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.431123 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-config-data-custom\") pod \"8df58633-8c06-4dd0-a538-b696f9736f6d\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.431205 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-config-data\") pod \"8df58633-8c06-4dd0-a538-b696f9736f6d\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.431347 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-combined-ca-bundle\") pod \"8df58633-8c06-4dd0-a538-b696f9736f6d\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.431370 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjngd\" (UniqueName: \"kubernetes.io/projected/8df58633-8c06-4dd0-a538-b696f9736f6d-kube-api-access-sjngd\") pod \"8df58633-8c06-4dd0-a538-b696f9736f6d\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.431458 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df58633-8c06-4dd0-a538-b696f9736f6d-logs\") pod \"8df58633-8c06-4dd0-a538-b696f9736f6d\" (UID: \"8df58633-8c06-4dd0-a538-b696f9736f6d\") " Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.431914 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.431931 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb6b2c-e7b0-4d0f-84b2-b064cc344258-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.432285 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8df58633-8c06-4dd0-a538-b696f9736f6d-logs" (OuterVolumeSpecName: "logs") pod "8df58633-8c06-4dd0-a538-b696f9736f6d" (UID: "8df58633-8c06-4dd0-a538-b696f9736f6d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.436424 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df58633-8c06-4dd0-a538-b696f9736f6d-kube-api-access-sjngd" (OuterVolumeSpecName: "kube-api-access-sjngd") pod "8df58633-8c06-4dd0-a538-b696f9736f6d" (UID: "8df58633-8c06-4dd0-a538-b696f9736f6d"). InnerVolumeSpecName "kube-api-access-sjngd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.437191 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8df58633-8c06-4dd0-a538-b696f9736f6d" (UID: "8df58633-8c06-4dd0-a538-b696f9736f6d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.464155 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8df58633-8c06-4dd0-a538-b696f9736f6d" (UID: "8df58633-8c06-4dd0-a538-b696f9736f6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.539413 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.539441 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjngd\" (UniqueName: \"kubernetes.io/projected/8df58633-8c06-4dd0-a538-b696f9736f6d-kube-api-access-sjngd\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.539453 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df58633-8c06-4dd0-a538-b696f9736f6d-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.539475 4953 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.561642 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-config-data" (OuterVolumeSpecName: "config-data") pod "8df58633-8c06-4dd0-a538-b696f9736f6d" (UID: "8df58633-8c06-4dd0-a538-b696f9736f6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:59 crc kubenswrapper[4953]: I1211 10:34:59.641368 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df58633-8c06-4dd0-a538-b696f9736f6d-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.149549 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-898848ccb-4kkwg" event={"ID":"8df58633-8c06-4dd0-a538-b696f9736f6d","Type":"ContainerDied","Data":"efbf57ee6cc4e52395d74297cfc67e1fd8c8270f4e8e0befe2f0f91f5aba8ddb"} Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.149627 4953 util.go:48] "No ready sandbox for pod can be found. 
Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.149627 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-898848ccb-4kkwg" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.150807 4953 scope.go:117] "RemoveContainer" containerID="320ee940deece2aa26a07f8c43904e8a8100f8c59b64aa05433ca8faa0893d3f" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.155330 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aacb6b2c-e7b0-4d0f-84b2-b064cc344258","Type":"ContainerDied","Data":"84c06759a531e6739c2b572a0ae4994709974ac0d06c8a8071c09f8624359379"} Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.155463 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.167810 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" event={"ID":"8521d832-efe5-4653-8c0e-8921f916e10f","Type":"ContainerStarted","Data":"d900251d830ca62ae055c9f9a2f8078dbd3d1545f50142030a3209a17a071070"} Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.169418 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.169448 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.174946 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"56f7d9a7-e24f-4b47-b829-7adcad2b0a60","Type":"ContainerStarted","Data":"34c2fd04e3b65f3a279e40c0af6591784d2ebe01fd4833cd51539e8756e2eea7"} Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.180120 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" podUID="8521d832-efe5-4653-8c0e-8921f916e10f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.190016 4953 scope.go:117] "RemoveContainer" containerID="90fc77b9c3712d164c13bcc6f46d987fe0e34162bdcf315b272f12d817bea344" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.244447 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" podStartSLOduration=11.244423102 podStartE2EDuration="11.244423102s" podCreationTimestamp="2025-12-11 10:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:35:00.189704551 +0000 UTC m=+1418.213563604" watchObservedRunningTime="2025-12-11 10:35:00.244423102 +0000 UTC m=+1418.268282135" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.260396 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-898848ccb-4kkwg"] Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.260509 4953 scope.go:117] "RemoveContainer" containerID="9d42fb894de3e0dfd7cfbab270564caee6df5efafc65ef9920c27e6a9de364c8" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.277641 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-898848ccb-4kkwg"] Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.285247 4953 scope.go:117] "RemoveContainer" containerID="ace0e8874130c5fd96b3799500075fbdca0917c37b866eb6edf4f6b69f412004" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.291809 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.304270 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.309462 4953 scope.go:117] "RemoveContainer" containerID="a52096f5fd3e155a4f341d1101e9acc2b73a598a10e142a3ae874728c8a1f732" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.313808 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:00 crc kubenswrapper[4953]: E1211 10:35:00.314405 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api-log" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.314436 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api-log" Dec 11 10:35:00 crc kubenswrapper[4953]: E1211 10:35:00.314457 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.314469 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api" Dec 11 10:35:00 crc kubenswrapper[4953]: E1211 10:35:00.314506 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="proxy-httpd" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.314517 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="proxy-httpd" Dec 11 10:35:00 crc kubenswrapper[4953]: E1211 10:35:00.314532 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="ceilometer-central-agent" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.314540 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="ceilometer-central-agent" Dec 11 10:35:00 crc kubenswrapper[4953]: E1211 10:35:00.314567 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="ceilometer-notification-agent" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.314597 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="ceilometer-notification-agent" Dec 11 10:35:00 crc kubenswrapper[4953]: E1211 10:35:00.314614 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="sg-core" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.314621 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="sg-core" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.314858 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="ceilometer-central-agent" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.314875 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="proxy-httpd" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.314889 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api-log" Dec 11 10:35:00 crc 
kubenswrapper[4953]: I1211 10:35:00.314915 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.314940 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="ceilometer-notification-agent" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.314957 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" containerName="sg-core" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.320697 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.322617 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.323835 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.325997 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.974193007 podStartE2EDuration="21.325972513s" podCreationTimestamp="2025-12-11 10:34:39 +0000 UTC" firstStartedPulling="2025-12-11 10:34:40.44766575 +0000 UTC m=+1398.471524783" lastFinishedPulling="2025-12-11 10:34:58.799445266 +0000 UTC m=+1416.823304289" observedRunningTime="2025-12-11 10:35:00.278086696 +0000 UTC m=+1418.301945729" watchObservedRunningTime="2025-12-11 10:35:00.325972513 +0000 UTC m=+1418.349831546" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.338050 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.359657 4953 scope.go:117] "RemoveContainer" containerID="336200d0a637af06e39f89da75fccab7c166e3053ea9acaa781f133b2f9d1013" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.453483 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-config-data\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.453584 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb9a4077-40c8-4683-92a3-66cca2323d8f-log-httpd\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.453700 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.453823 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb9a4077-40c8-4683-92a3-66cca2323d8f-run-httpd\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0"
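
Two "Observed pod startup duration" entries appear in this window, and their arithmetic is worth decoding: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the image-pull window, with that window measured on the kubelet's monotonic clock (the m=+... offsets). A worked check with the numbers from the openstackclient entry above, nothing assumed beyond the log itself:

    # Worked check of the openstackclient startup entry above:
    # podStartSLOduration = podStartE2EDuration - time spent pulling images,
    # with the pull window taken from the monotonic m=+... offsets.
    e2e = 21.325972513                      # podStartE2EDuration, seconds
    first_started_pulling = 1398.471524783  # m=+ offset of firstStartedPulling
    last_finished_pulling = 1416.823304289  # m=+ offset of lastFinishedPulling

    pull_window = last_finished_pulling - first_started_pulling
    slo = e2e - pull_window
    print(f"pull window: {pull_window:.9f}s")   # 18.351779506s
    print(f"SLO duration: {slo:.9f}s")          # 2.974193007s, exactly as logged

The swift-proxy entry earlier shows the degenerate case: its pull timestamps are the zero value ("0001-01-01 00:00:00"), meaning nothing was pulled, so the SLO and E2E figures coincide at 11.244423102s.
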
Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.453889 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-scripts\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.453962 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77fq8\" (UniqueName: \"kubernetes.io/projected/bb9a4077-40c8-4683-92a3-66cca2323d8f-kube-api-access-77fq8\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.454079 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.570954 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb9a4077-40c8-4683-92a3-66cca2323d8f-run-httpd\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.571034 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-scripts\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.571097 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77fq8\" (UniqueName: \"kubernetes.io/projected/bb9a4077-40c8-4683-92a3-66cca2323d8f-kube-api-access-77fq8\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.571151 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.571199 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-config-data\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.571240 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb9a4077-40c8-4683-92a3-66cca2323d8f-log-httpd\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.571354 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " 
pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.572833 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb9a4077-40c8-4683-92a3-66cca2323d8f-run-httpd\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.573903 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" path="/var/lib/kubelet/pods/8df58633-8c06-4dd0-a538-b696f9736f6d/volumes" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.574915 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aacb6b2c-e7b0-4d0f-84b2-b064cc344258" path="/var/lib/kubelet/pods/aacb6b2c-e7b0-4d0f-84b2-b064cc344258/volumes" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.588047 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb9a4077-40c8-4683-92a3-66cca2323d8f-log-httpd\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.593406 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-scripts\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.593851 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.594033 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.633640 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77fq8\" (UniqueName: \"kubernetes.io/projected/bb9a4077-40c8-4683-92a3-66cca2323d8f-kube-api-access-77fq8\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.667840 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-config-data\") pod \"ceilometer-0\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " pod="openstack/ceilometer-0" Dec 11 10:35:00 crc kubenswrapper[4953]: I1211 10:35:00.949607 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:35:01 crc kubenswrapper[4953]: I1211 10:35:01.203078 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" Dec 11 10:35:01 crc kubenswrapper[4953]: I1211 10:35:01.541320 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:01 crc kubenswrapper[4953]: W1211 10:35:01.545459 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb9a4077_40c8_4683_92a3_66cca2323d8f.slice/crio-75f764a3b3290c191d42e652307df3c29e53907c1733cd250d0b23ddbf0ed868 WatchSource:0}: Error finding container 75f764a3b3290c191d42e652307df3c29e53907c1733cd250d0b23ddbf0ed868: Status 404 returned error can't find the container with id 75f764a3b3290c191d42e652307df3c29e53907c1733cd250d0b23ddbf0ed868 Dec 11 10:35:02 crc kubenswrapper[4953]: I1211 10:35:02.204004 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb9a4077-40c8-4683-92a3-66cca2323d8f","Type":"ContainerStarted","Data":"75f764a3b3290c191d42e652307df3c29e53907c1733cd250d0b23ddbf0ed868"} Dec 11 10:35:04 crc kubenswrapper[4953]: I1211 10:35:04.224863 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb9a4077-40c8-4683-92a3-66cca2323d8f","Type":"ContainerStarted","Data":"f23ead92fc17ff83aa4994cae9bcbecba6fb00a4242bbd3dab321315bb0aa0d8"} Dec 11 10:35:04 crc kubenswrapper[4953]: I1211 10:35:04.274496 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-898848ccb-4kkwg" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Dec 11 10:35:04 crc kubenswrapper[4953]: I1211 10:35:04.274541 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-898848ccb-4kkwg" podUID="8df58633-8c06-4dd0-a538-b696f9736f6d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:35:05 crc kubenswrapper[4953]: I1211 10:35:05.050115 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" Dec 11 10:35:05 crc kubenswrapper[4953]: I1211 10:35:05.253756 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb9a4077-40c8-4683-92a3-66cca2323d8f","Type":"ContainerStarted","Data":"4e6e34f7d1407a8aa2ca9d0e2713b040d14d78ee2286bb20a3aa716615c6add4"} Dec 11 10:35:06 crc kubenswrapper[4953]: I1211 10:35:06.264655 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb9a4077-40c8-4683-92a3-66cca2323d8f","Type":"ContainerStarted","Data":"c4d5259e24ff587e9cb2ec69b82a4cb1cf9af64fc79cfbb0cba8ec65f325a8a3"} Dec 11 10:35:08 crc kubenswrapper[4953]: I1211 10:35:08.311597 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb9a4077-40c8-4683-92a3-66cca2323d8f","Type":"ContainerStarted","Data":"af4a4081d6ad49974a98b00fab7e79067d06c853cb35c7c45182e87b4416c9e8"} Dec 11 10:35:08 crc kubenswrapper[4953]: I1211 10:35:08.312088 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Dec 11 10:35:08 crc kubenswrapper[4953]: I1211 10:35:08.338537 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.413629648 podStartE2EDuration="8.338515297s" podCreationTimestamp="2025-12-11 10:35:00 +0000 UTC" firstStartedPulling="2025-12-11 10:35:01.547822937 +0000 UTC m=+1419.571681970" lastFinishedPulling="2025-12-11 10:35:07.472708596 +0000 UTC m=+1425.496567619" observedRunningTime="2025-12-11 10:35:08.332213638 +0000 UTC m=+1426.356072681" watchObservedRunningTime="2025-12-11 10:35:08.338515297 +0000 UTC m=+1426.362374340" Dec 11 10:35:10 crc kubenswrapper[4953]: I1211 10:35:10.706768 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:10 crc kubenswrapper[4953]: I1211 10:35:10.707318 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="ceilometer-central-agent" containerID="cri-o://f23ead92fc17ff83aa4994cae9bcbecba6fb00a4242bbd3dab321315bb0aa0d8" gracePeriod=30 Dec 11 10:35:10 crc kubenswrapper[4953]: I1211 10:35:10.707461 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="proxy-httpd" containerID="cri-o://af4a4081d6ad49974a98b00fab7e79067d06c853cb35c7c45182e87b4416c9e8" gracePeriod=30 Dec 11 10:35:10 crc kubenswrapper[4953]: I1211 10:35:10.707499 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="sg-core" containerID="cri-o://c4d5259e24ff587e9cb2ec69b82a4cb1cf9af64fc79cfbb0cba8ec65f325a8a3" gracePeriod=30 Dec 11 10:35:10 crc kubenswrapper[4953]: I1211 10:35:10.707534 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="ceilometer-notification-agent" containerID="cri-o://4e6e34f7d1407a8aa2ca9d0e2713b040d14d78ee2286bb20a3aa716615c6add4" gracePeriod=30 Dec 11 10:35:11 crc kubenswrapper[4953]: E1211 10:35:11.100322 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb9a4077_40c8_4683_92a3_66cca2323d8f.slice/crio-conmon-af4a4081d6ad49974a98b00fab7e79067d06c853cb35c7c45182e87b4416c9e8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb9a4077_40c8_4683_92a3_66cca2323d8f.slice/crio-4e6e34f7d1407a8aa2ca9d0e2713b040d14d78ee2286bb20a3aa716615c6add4.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.399128 4953 generic.go:334] "Generic (PLEG): container finished" podID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerID="af4a4081d6ad49974a98b00fab7e79067d06c853cb35c7c45182e87b4416c9e8" exitCode=0 Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.399611 4953 generic.go:334] "Generic (PLEG): container finished" podID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerID="c4d5259e24ff587e9cb2ec69b82a4cb1cf9af64fc79cfbb0cba8ec65f325a8a3" exitCode=2 Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.399628 4953 generic.go:334] "Generic (PLEG): container finished" podID="bb9a4077-40c8-4683-92a3-66cca2323d8f" 
containerID="4e6e34f7d1407a8aa2ca9d0e2713b040d14d78ee2286bb20a3aa716615c6add4" exitCode=0 Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.399637 4953 generic.go:334] "Generic (PLEG): container finished" podID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerID="f23ead92fc17ff83aa4994cae9bcbecba6fb00a4242bbd3dab321315bb0aa0d8" exitCode=0 Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.399185 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb9a4077-40c8-4683-92a3-66cca2323d8f","Type":"ContainerDied","Data":"af4a4081d6ad49974a98b00fab7e79067d06c853cb35c7c45182e87b4416c9e8"} Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.399716 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb9a4077-40c8-4683-92a3-66cca2323d8f","Type":"ContainerDied","Data":"c4d5259e24ff587e9cb2ec69b82a4cb1cf9af64fc79cfbb0cba8ec65f325a8a3"} Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.399742 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb9a4077-40c8-4683-92a3-66cca2323d8f","Type":"ContainerDied","Data":"4e6e34f7d1407a8aa2ca9d0e2713b040d14d78ee2286bb20a3aa716615c6add4"} Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.399757 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb9a4077-40c8-4683-92a3-66cca2323d8f","Type":"ContainerDied","Data":"f23ead92fc17ff83aa4994cae9bcbecba6fb00a4242bbd3dab321315bb0aa0d8"} Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.625945 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.667335 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-config-data\") pod \"bb9a4077-40c8-4683-92a3-66cca2323d8f\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.667440 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb9a4077-40c8-4683-92a3-66cca2323d8f-run-httpd\") pod \"bb9a4077-40c8-4683-92a3-66cca2323d8f\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.667493 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb9a4077-40c8-4683-92a3-66cca2323d8f-log-httpd\") pod \"bb9a4077-40c8-4683-92a3-66cca2323d8f\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.667567 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-sg-core-conf-yaml\") pod \"bb9a4077-40c8-4683-92a3-66cca2323d8f\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.667758 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77fq8\" (UniqueName: \"kubernetes.io/projected/bb9a4077-40c8-4683-92a3-66cca2323d8f-kube-api-access-77fq8\") pod \"bb9a4077-40c8-4683-92a3-66cca2323d8f\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.667779 4953 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-combined-ca-bundle\") pod \"bb9a4077-40c8-4683-92a3-66cca2323d8f\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.667800 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-scripts\") pod \"bb9a4077-40c8-4683-92a3-66cca2323d8f\" (UID: \"bb9a4077-40c8-4683-92a3-66cca2323d8f\") " Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.668334 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb9a4077-40c8-4683-92a3-66cca2323d8f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bb9a4077-40c8-4683-92a3-66cca2323d8f" (UID: "bb9a4077-40c8-4683-92a3-66cca2323d8f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.668444 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb9a4077-40c8-4683-92a3-66cca2323d8f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bb9a4077-40c8-4683-92a3-66cca2323d8f" (UID: "bb9a4077-40c8-4683-92a3-66cca2323d8f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.675234 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-scripts" (OuterVolumeSpecName: "scripts") pod "bb9a4077-40c8-4683-92a3-66cca2323d8f" (UID: "bb9a4077-40c8-4683-92a3-66cca2323d8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.690289 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9a4077-40c8-4683-92a3-66cca2323d8f-kube-api-access-77fq8" (OuterVolumeSpecName: "kube-api-access-77fq8") pod "bb9a4077-40c8-4683-92a3-66cca2323d8f" (UID: "bb9a4077-40c8-4683-92a3-66cca2323d8f"). InnerVolumeSpecName "kube-api-access-77fq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.704031 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bb9a4077-40c8-4683-92a3-66cca2323d8f" (UID: "bb9a4077-40c8-4683-92a3-66cca2323d8f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.770356 4953 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb9a4077-40c8-4683-92a3-66cca2323d8f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.771503 4953 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb9a4077-40c8-4683-92a3-66cca2323d8f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.771647 4953 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.771735 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77fq8\" (UniqueName: \"kubernetes.io/projected/bb9a4077-40c8-4683-92a3-66cca2323d8f-kube-api-access-77fq8\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.771827 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.780516 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-config-data" (OuterVolumeSpecName: "config-data") pod "bb9a4077-40c8-4683-92a3-66cca2323d8f" (UID: "bb9a4077-40c8-4683-92a3-66cca2323d8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.781890 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb9a4077-40c8-4683-92a3-66cca2323d8f" (UID: "bb9a4077-40c8-4683-92a3-66cca2323d8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.874167 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:11 crc kubenswrapper[4953]: I1211 10:35:11.874421 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9a4077-40c8-4683-92a3-66cca2323d8f-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.412371 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb9a4077-40c8-4683-92a3-66cca2323d8f","Type":"ContainerDied","Data":"75f764a3b3290c191d42e652307df3c29e53907c1733cd250d0b23ddbf0ed868"} Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.412741 4953 scope.go:117] "RemoveContainer" containerID="af4a4081d6ad49974a98b00fab7e79067d06c853cb35c7c45182e87b4416c9e8" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.412604 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.437870 4953 scope.go:117] "RemoveContainer" containerID="c4d5259e24ff587e9cb2ec69b82a4cb1cf9af64fc79cfbb0cba8ec65f325a8a3" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.450272 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.462319 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.466695 4953 scope.go:117] "RemoveContainer" containerID="4e6e34f7d1407a8aa2ca9d0e2713b040d14d78ee2286bb20a3aa716615c6add4" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.498542 4953 scope.go:117] "RemoveContainer" containerID="f23ead92fc17ff83aa4994cae9bcbecba6fb00a4242bbd3dab321315bb0aa0d8" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.500932 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" path="/var/lib/kubelet/pods/bb9a4077-40c8-4683-92a3-66cca2323d8f/volumes" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.502032 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:12 crc kubenswrapper[4953]: E1211 10:35:12.502384 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="ceilometer-central-agent" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.502398 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="ceilometer-central-agent" Dec 11 10:35:12 crc kubenswrapper[4953]: E1211 10:35:12.502441 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="proxy-httpd" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.502448 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="proxy-httpd" Dec 11 10:35:12 crc kubenswrapper[4953]: E1211 10:35:12.502463 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="ceilometer-notification-agent" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.502470 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="ceilometer-notification-agent" Dec 11 10:35:12 crc kubenswrapper[4953]: E1211 10:35:12.502494 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="sg-core" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.502503 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="sg-core" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.503655 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="proxy-httpd" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.503692 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="ceilometer-notification-agent" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.503713 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="sg-core" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 
10:35:12.503734 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9a4077-40c8-4683-92a3-66cca2323d8f" containerName="ceilometer-central-agent" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.506627 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.508287 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.512797 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.519133 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.590554 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6a86b70-2c86-45dc-a446-004affe33e67-run-httpd\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.590662 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.590689 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.590780 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-scripts\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.590865 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6a86b70-2c86-45dc-a446-004affe33e67-log-httpd\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.590934 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-config-data\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.591214 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lrqj\" (UniqueName: \"kubernetes.io/projected/e6a86b70-2c86-45dc-a446-004affe33e67-kube-api-access-7lrqj\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.692933 4953 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.692983 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6a86b70-2c86-45dc-a446-004affe33e67-run-httpd\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.693007 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.693034 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-scripts\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.693086 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6a86b70-2c86-45dc-a446-004affe33e67-log-httpd\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.693159 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-config-data\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.693210 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lrqj\" (UniqueName: \"kubernetes.io/projected/e6a86b70-2c86-45dc-a446-004affe33e67-kube-api-access-7lrqj\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.693952 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6a86b70-2c86-45dc-a446-004affe33e67-run-httpd\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.694288 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6a86b70-2c86-45dc-a446-004affe33e67-log-httpd\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.698287 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.699251 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.702829 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-scripts\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.705950 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-config-data\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.714810 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lrqj\" (UniqueName: \"kubernetes.io/projected/e6a86b70-2c86-45dc-a446-004affe33e67-kube-api-access-7lrqj\") pod \"ceilometer-0\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.835190 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:35:12 crc kubenswrapper[4953]: I1211 10:35:12.926559 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:13 crc kubenswrapper[4953]: I1211 10:35:13.369231 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:13 crc kubenswrapper[4953]: I1211 10:35:13.422657 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6a86b70-2c86-45dc-a446-004affe33e67","Type":"ContainerStarted","Data":"4e829e90163c6bd17a9b6a4e54ef0f78231009f09ed45753ae92b20599125133"} Dec 11 10:35:17 crc kubenswrapper[4953]: I1211 10:35:17.460162 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6a86b70-2c86-45dc-a446-004affe33e67","Type":"ContainerStarted","Data":"88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792"} Dec 11 10:35:18 crc kubenswrapper[4953]: I1211 10:35:18.471284 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6a86b70-2c86-45dc-a446-004affe33e67","Type":"ContainerStarted","Data":"e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086"} Dec 11 10:35:18 crc kubenswrapper[4953]: I1211 10:35:18.471643 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6a86b70-2c86-45dc-a446-004affe33e67","Type":"ContainerStarted","Data":"e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a"} Dec 11 10:35:20 crc kubenswrapper[4953]: I1211 10:35:20.315207 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:35:20 crc kubenswrapper[4953]: I1211 10:35:20.316015 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="755305c4-518a-48f8-b732-a825b32487f6" containerName="glance-log" containerID="cri-o://3cc49f527dc719457e9eb36e597ae7042fff0664634dbe3b3be02b7b0b78227b" gracePeriod=30 Dec 11 10:35:20 crc kubenswrapper[4953]: I1211 10:35:20.316177 4953 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="755305c4-518a-48f8-b732-a825b32487f6" containerName="glance-httpd" containerID="cri-o://45336e0c31787cbf6b006a00a75f728e5a2a56c26216027dc3655f6abff8a706" gracePeriod=30 Dec 11 10:35:20 crc kubenswrapper[4953]: I1211 10:35:20.508101 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6a86b70-2c86-45dc-a446-004affe33e67","Type":"ContainerStarted","Data":"05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8"} Dec 11 10:35:20 crc kubenswrapper[4953]: I1211 10:35:20.508318 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="ceilometer-central-agent" containerID="cri-o://88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792" gracePeriod=30 Dec 11 10:35:20 crc kubenswrapper[4953]: I1211 10:35:20.508821 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 10:35:20 crc kubenswrapper[4953]: I1211 10:35:20.509166 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="proxy-httpd" containerID="cri-o://05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8" gracePeriod=30 Dec 11 10:35:20 crc kubenswrapper[4953]: I1211 10:35:20.509217 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="sg-core" containerID="cri-o://e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086" gracePeriod=30 Dec 11 10:35:20 crc kubenswrapper[4953]: I1211 10:35:20.509251 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="ceilometer-notification-agent" containerID="cri-o://e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a" gracePeriod=30 Dec 11 10:35:20 crc kubenswrapper[4953]: I1211 10:35:20.519318 4953 generic.go:334] "Generic (PLEG): container finished" podID="755305c4-518a-48f8-b732-a825b32487f6" containerID="3cc49f527dc719457e9eb36e597ae7042fff0664634dbe3b3be02b7b0b78227b" exitCode=143 Dec 11 10:35:20 crc kubenswrapper[4953]: I1211 10:35:20.519375 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"755305c4-518a-48f8-b732-a825b32487f6","Type":"ContainerDied","Data":"3cc49f527dc719457e9eb36e597ae7042fff0664634dbe3b3be02b7b0b78227b"} Dec 11 10:35:20 crc kubenswrapper[4953]: I1211 10:35:20.557699 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.513275595 podStartE2EDuration="8.557672331s" podCreationTimestamp="2025-12-11 10:35:12 +0000 UTC" firstStartedPulling="2025-12-11 10:35:13.366785454 +0000 UTC m=+1431.390644487" lastFinishedPulling="2025-12-11 10:35:19.41118219 +0000 UTC m=+1437.435041223" observedRunningTime="2025-12-11 10:35:20.538664801 +0000 UTC m=+1438.562523844" watchObservedRunningTime="2025-12-11 10:35:20.557672331 +0000 UTC m=+1438.581531364" Dec 11 10:35:21 crc kubenswrapper[4953]: I1211 10:35:21.528249 4953 generic.go:334] "Generic (PLEG): container finished" podID="e6a86b70-2c86-45dc-a446-004affe33e67" containerID="05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8" exitCode=0 Dec 11 10:35:21 crc 
kubenswrapper[4953]: I1211 10:35:21.528510 4953 generic.go:334] "Generic (PLEG): container finished" podID="e6a86b70-2c86-45dc-a446-004affe33e67" containerID="e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086" exitCode=2 Dec 11 10:35:21 crc kubenswrapper[4953]: I1211 10:35:21.528520 4953 generic.go:334] "Generic (PLEG): container finished" podID="e6a86b70-2c86-45dc-a446-004affe33e67" containerID="e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a" exitCode=0 Dec 11 10:35:21 crc kubenswrapper[4953]: I1211 10:35:21.528317 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6a86b70-2c86-45dc-a446-004affe33e67","Type":"ContainerDied","Data":"05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8"} Dec 11 10:35:21 crc kubenswrapper[4953]: I1211 10:35:21.528554 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6a86b70-2c86-45dc-a446-004affe33e67","Type":"ContainerDied","Data":"e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086"} Dec 11 10:35:21 crc kubenswrapper[4953]: I1211 10:35:21.528567 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6a86b70-2c86-45dc-a446-004affe33e67","Type":"ContainerDied","Data":"e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a"} Dec 11 10:35:22 crc kubenswrapper[4953]: I1211 10:35:22.037771 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:35:22 crc kubenswrapper[4953]: I1211 10:35:22.038009 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b8c04c52-6e9d-4254-a222-85f06c186b92" containerName="glance-log" containerID="cri-o://178ccd876435208341ba5464adb24cfa2cf54bd9fcc3241af2650d6d14702f82" gracePeriod=30 Dec 11 10:35:22 crc kubenswrapper[4953]: I1211 10:35:22.038102 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b8c04c52-6e9d-4254-a222-85f06c186b92" containerName="glance-httpd" containerID="cri-o://affd56e178058ed46144c2bfb37fc7368d599e663b8408df548ed9b1be736499" gracePeriod=30 Dec 11 10:35:22 crc kubenswrapper[4953]: I1211 10:35:22.538695 4953 generic.go:334] "Generic (PLEG): container finished" podID="b8c04c52-6e9d-4254-a222-85f06c186b92" containerID="178ccd876435208341ba5464adb24cfa2cf54bd9fcc3241af2650d6d14702f82" exitCode=143 Dec 11 10:35:22 crc kubenswrapper[4953]: I1211 10:35:22.538782 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b8c04c52-6e9d-4254-a222-85f06c186b92","Type":"ContainerDied","Data":"178ccd876435208341ba5464adb24cfa2cf54bd9fcc3241af2650d6d14702f82"} Dec 11 10:35:23 crc kubenswrapper[4953]: I1211 10:35:23.588285 4953 generic.go:334] "Generic (PLEG): container finished" podID="755305c4-518a-48f8-b732-a825b32487f6" containerID="45336e0c31787cbf6b006a00a75f728e5a2a56c26216027dc3655f6abff8a706" exitCode=0 Dec 11 10:35:23 crc kubenswrapper[4953]: I1211 10:35:23.588363 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"755305c4-518a-48f8-b732-a825b32487f6","Type":"ContainerDied","Data":"45336e0c31787cbf6b006a00a75f728e5a2a56c26216027dc3655f6abff8a706"} Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.070910 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.256333 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/755305c4-518a-48f8-b732-a825b32487f6-httpd-run\") pod \"755305c4-518a-48f8-b732-a825b32487f6\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.256488 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"755305c4-518a-48f8-b732-a825b32487f6\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.256542 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-config-data\") pod \"755305c4-518a-48f8-b732-a825b32487f6\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.256632 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/755305c4-518a-48f8-b732-a825b32487f6-logs\") pod \"755305c4-518a-48f8-b732-a825b32487f6\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.256658 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjhcg\" (UniqueName: \"kubernetes.io/projected/755305c4-518a-48f8-b732-a825b32487f6-kube-api-access-jjhcg\") pod \"755305c4-518a-48f8-b732-a825b32487f6\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.256730 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-public-tls-certs\") pod \"755305c4-518a-48f8-b732-a825b32487f6\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.256754 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-scripts\") pod \"755305c4-518a-48f8-b732-a825b32487f6\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.256792 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-combined-ca-bundle\") pod \"755305c4-518a-48f8-b732-a825b32487f6\" (UID: \"755305c4-518a-48f8-b732-a825b32487f6\") " Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.257810 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755305c4-518a-48f8-b732-a825b32487f6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "755305c4-518a-48f8-b732-a825b32487f6" (UID: "755305c4-518a-48f8-b732-a825b32487f6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.260362 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755305c4-518a-48f8-b732-a825b32487f6-logs" (OuterVolumeSpecName: "logs") pod "755305c4-518a-48f8-b732-a825b32487f6" (UID: "755305c4-518a-48f8-b732-a825b32487f6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.264594 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "755305c4-518a-48f8-b732-a825b32487f6" (UID: "755305c4-518a-48f8-b732-a825b32487f6"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.266867 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-scripts" (OuterVolumeSpecName: "scripts") pod "755305c4-518a-48f8-b732-a825b32487f6" (UID: "755305c4-518a-48f8-b732-a825b32487f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.266900 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/755305c4-518a-48f8-b732-a825b32487f6-kube-api-access-jjhcg" (OuterVolumeSpecName: "kube-api-access-jjhcg") pod "755305c4-518a-48f8-b732-a825b32487f6" (UID: "755305c4-518a-48f8-b732-a825b32487f6"). InnerVolumeSpecName "kube-api-access-jjhcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.313159 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "755305c4-518a-48f8-b732-a825b32487f6" (UID: "755305c4-518a-48f8-b732-a825b32487f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.353769 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "755305c4-518a-48f8-b732-a825b32487f6" (UID: "755305c4-518a-48f8-b732-a825b32487f6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.359225 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.359267 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/755305c4-518a-48f8-b732-a825b32487f6-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.359280 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjhcg\" (UniqueName: \"kubernetes.io/projected/755305c4-518a-48f8-b732-a825b32487f6-kube-api-access-jjhcg\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.359294 4953 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.359309 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.359321 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.359777 4953 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/755305c4-518a-48f8-b732-a825b32487f6-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.384494 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.389118 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-config-data" (OuterVolumeSpecName: "config-data") pod "755305c4-518a-48f8-b732-a825b32487f6" (UID: "755305c4-518a-48f8-b732-a825b32487f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.461350 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.461386 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755305c4-518a-48f8-b732-a825b32487f6-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.598406 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"755305c4-518a-48f8-b732-a825b32487f6","Type":"ContainerDied","Data":"94741ede7ed8427b5f6a9607545b74932b6d6d1c091555374a5e7b143ed17bfd"} Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.598460 4953 scope.go:117] "RemoveContainer" containerID="45336e0c31787cbf6b006a00a75f728e5a2a56c26216027dc3655f6abff8a706" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.598504 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.623532 4953 scope.go:117] "RemoveContainer" containerID="3cc49f527dc719457e9eb36e597ae7042fff0664634dbe3b3be02b7b0b78227b" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.636920 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.647211 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.659111 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:35:24 crc kubenswrapper[4953]: E1211 10:35:24.662125 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755305c4-518a-48f8-b732-a825b32487f6" containerName="glance-httpd" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.662161 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="755305c4-518a-48f8-b732-a825b32487f6" containerName="glance-httpd" Dec 11 10:35:24 crc kubenswrapper[4953]: E1211 10:35:24.662187 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755305c4-518a-48f8-b732-a825b32487f6" containerName="glance-log" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.662198 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="755305c4-518a-48f8-b732-a825b32487f6" containerName="glance-log" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.662436 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="755305c4-518a-48f8-b732-a825b32487f6" containerName="glance-log" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.662469 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="755305c4-518a-48f8-b732-a825b32487f6" containerName="glance-httpd" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.663933 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.667218 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.667491 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.683802 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.767013 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e067a835-8a1a-4672-aaea-b8c101109018-logs\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.767092 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-scripts\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.767150 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.767263 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.767360 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e067a835-8a1a-4672-aaea-b8c101109018-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.767389 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.767415 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-config-data\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.767695 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6x595\" (UniqueName: \"kubernetes.io/projected/e067a835-8a1a-4672-aaea-b8c101109018-kube-api-access-6x595\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.868993 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.869058 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.869158 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e067a835-8a1a-4672-aaea-b8c101109018-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.869196 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.869236 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-config-data\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.869310 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x595\" (UniqueName: \"kubernetes.io/projected/e067a835-8a1a-4672-aaea-b8c101109018-kube-api-access-6x595\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.869353 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e067a835-8a1a-4672-aaea-b8c101109018-logs\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.869382 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-scripts\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.869621 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.870444 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e067a835-8a1a-4672-aaea-b8c101109018-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.870530 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e067a835-8a1a-4672-aaea-b8c101109018-logs\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.876224 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.880281 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-scripts\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.880950 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.881400 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-config-data\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.898330 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x595\" (UniqueName: \"kubernetes.io/projected/e067a835-8a1a-4672-aaea-b8c101109018-kube-api-access-6x595\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.907539 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " pod="openstack/glance-default-external-api-0" Dec 11 10:35:24 crc kubenswrapper[4953]: I1211 10:35:24.985023 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.612212 4953 generic.go:334] "Generic (PLEG): container finished" podID="b8c04c52-6e9d-4254-a222-85f06c186b92" containerID="affd56e178058ed46144c2bfb37fc7368d599e663b8408df548ed9b1be736499" exitCode=0 Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.612355 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b8c04c52-6e9d-4254-a222-85f06c186b92","Type":"ContainerDied","Data":"affd56e178058ed46144c2bfb37fc7368d599e663b8408df548ed9b1be736499"} Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.702228 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.704630 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:25 crc kubenswrapper[4953]: W1211 10:35:25.705791 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode067a835_8a1a_4672_aaea_b8c101109018.slice/crio-3ef7f0dbe4f89164e33f294de7045589466f6b4bbcc64f97b84b928e92f92a28 WatchSource:0}: Error finding container 3ef7f0dbe4f89164e33f294de7045589466f6b4bbcc64f97b84b928e92f92a28: Status 404 returned error can't find the container with id 3ef7f0dbe4f89164e33f294de7045589466f6b4bbcc64f97b84b928e92f92a28 Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.785596 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c04c52-6e9d-4254-a222-85f06c186b92-logs\") pod \"b8c04c52-6e9d-4254-a222-85f06c186b92\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.785692 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-combined-ca-bundle\") pod \"b8c04c52-6e9d-4254-a222-85f06c186b92\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.785714 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8c04c52-6e9d-4254-a222-85f06c186b92-httpd-run\") pod \"b8c04c52-6e9d-4254-a222-85f06c186b92\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.785765 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-config-data\") pod \"b8c04c52-6e9d-4254-a222-85f06c186b92\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.785791 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-scripts\") pod \"b8c04c52-6e9d-4254-a222-85f06c186b92\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.785865 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-internal-tls-certs\") pod 
\"b8c04c52-6e9d-4254-a222-85f06c186b92\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.785880 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"b8c04c52-6e9d-4254-a222-85f06c186b92\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.785956 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5ndh\" (UniqueName: \"kubernetes.io/projected/b8c04c52-6e9d-4254-a222-85f06c186b92-kube-api-access-n5ndh\") pod \"b8c04c52-6e9d-4254-a222-85f06c186b92\" (UID: \"b8c04c52-6e9d-4254-a222-85f06c186b92\") " Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.788467 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8c04c52-6e9d-4254-a222-85f06c186b92-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b8c04c52-6e9d-4254-a222-85f06c186b92" (UID: "b8c04c52-6e9d-4254-a222-85f06c186b92"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.789621 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8c04c52-6e9d-4254-a222-85f06c186b92-logs" (OuterVolumeSpecName: "logs") pod "b8c04c52-6e9d-4254-a222-85f06c186b92" (UID: "b8c04c52-6e9d-4254-a222-85f06c186b92"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.792896 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-scripts" (OuterVolumeSpecName: "scripts") pod "b8c04c52-6e9d-4254-a222-85f06c186b92" (UID: "b8c04c52-6e9d-4254-a222-85f06c186b92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.793183 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c04c52-6e9d-4254-a222-85f06c186b92-kube-api-access-n5ndh" (OuterVolumeSpecName: "kube-api-access-n5ndh") pod "b8c04c52-6e9d-4254-a222-85f06c186b92" (UID: "b8c04c52-6e9d-4254-a222-85f06c186b92"). InnerVolumeSpecName "kube-api-access-n5ndh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.799976 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "b8c04c52-6e9d-4254-a222-85f06c186b92" (UID: "b8c04c52-6e9d-4254-a222-85f06c186b92"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.824222 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8c04c52-6e9d-4254-a222-85f06c186b92" (UID: "b8c04c52-6e9d-4254-a222-85f06c186b92"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.857111 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-config-data" (OuterVolumeSpecName: "config-data") pod "b8c04c52-6e9d-4254-a222-85f06c186b92" (UID: "b8c04c52-6e9d-4254-a222-85f06c186b92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.870828 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b8c04c52-6e9d-4254-a222-85f06c186b92" (UID: "b8c04c52-6e9d-4254-a222-85f06c186b92"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.888061 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5ndh\" (UniqueName: \"kubernetes.io/projected/b8c04c52-6e9d-4254-a222-85f06c186b92-kube-api-access-n5ndh\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.888106 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c04c52-6e9d-4254-a222-85f06c186b92-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.888115 4953 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8c04c52-6e9d-4254-a222-85f06c186b92-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.888125 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.888133 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.888141 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.888174 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.888184 4953 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c04c52-6e9d-4254-a222-85f06c186b92-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.913787 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 11 10:35:25 crc kubenswrapper[4953]: I1211 10:35:25.989934 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 
10:35:26.485995 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="755305c4-518a-48f8-b732-a825b32487f6" path="/var/lib/kubelet/pods/755305c4-518a-48f8-b732-a825b32487f6/volumes" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.630005 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e067a835-8a1a-4672-aaea-b8c101109018","Type":"ContainerStarted","Data":"2e9a60ec1684ff881133bf906166805dce055256199aa98702401b39a20c68d8"} Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.630065 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e067a835-8a1a-4672-aaea-b8c101109018","Type":"ContainerStarted","Data":"3ef7f0dbe4f89164e33f294de7045589466f6b4bbcc64f97b84b928e92f92a28"} Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.632117 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b8c04c52-6e9d-4254-a222-85f06c186b92","Type":"ContainerDied","Data":"4b4078bfee6504a43c8452ccd9f53e4be023e0a20fb30022f659e18955499ca1"} Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.632157 4953 scope.go:117] "RemoveContainer" containerID="affd56e178058ed46144c2bfb37fc7368d599e663b8408df548ed9b1be736499" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.632343 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.665895 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.691292 4953 scope.go:117] "RemoveContainer" containerID="178ccd876435208341ba5464adb24cfa2cf54bd9fcc3241af2650d6d14702f82" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.691705 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.703833 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:35:26 crc kubenswrapper[4953]: E1211 10:35:26.704389 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c04c52-6e9d-4254-a222-85f06c186b92" containerName="glance-httpd" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.704407 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c04c52-6e9d-4254-a222-85f06c186b92" containerName="glance-httpd" Dec 11 10:35:26 crc kubenswrapper[4953]: E1211 10:35:26.704422 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c04c52-6e9d-4254-a222-85f06c186b92" containerName="glance-log" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.704428 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c04c52-6e9d-4254-a222-85f06c186b92" containerName="glance-log" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.704804 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c04c52-6e9d-4254-a222-85f06c186b92" containerName="glance-httpd" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.704859 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c04c52-6e9d-4254-a222-85f06c186b92" containerName="glance-log" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.705929 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.710307 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.710699 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.722649 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.819316 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8f5v\" (UniqueName: \"kubernetes.io/projected/7b77681a-0823-42e6-b0a4-2af1ce955970-kube-api-access-q8f5v\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.819414 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.819446 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b77681a-0823-42e6-b0a4-2af1ce955970-logs\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.819477 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.819508 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.819652 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b77681a-0823-42e6-b0a4-2af1ce955970-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.819688 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.819735 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.839653 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vq2rm"] Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.841200 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vq2rm" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.853858 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vq2rm"] Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.922329 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8f5v\" (UniqueName: \"kubernetes.io/projected/7b77681a-0823-42e6-b0a4-2af1ce955970-kube-api-access-q8f5v\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.922396 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab54cd16-46dc-45ba-95b2-28afc8aef126-operator-scripts\") pod \"nova-api-db-create-vq2rm\" (UID: \"ab54cd16-46dc-45ba-95b2-28afc8aef126\") " pod="openstack/nova-api-db-create-vq2rm" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.922419 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.922439 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b77681a-0823-42e6-b0a4-2af1ce955970-logs\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.922460 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.922484 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.922526 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjrdx\" (UniqueName: \"kubernetes.io/projected/ab54cd16-46dc-45ba-95b2-28afc8aef126-kube-api-access-cjrdx\") pod \"nova-api-db-create-vq2rm\" (UID: \"ab54cd16-46dc-45ba-95b2-28afc8aef126\") " pod="openstack/nova-api-db-create-vq2rm" Dec 11 10:35:26 crc 
kubenswrapper[4953]: I1211 10:35:26.922597 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b77681a-0823-42e6-b0a4-2af1ce955970-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.922620 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.922669 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.924906 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.926225 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b77681a-0823-42e6-b0a4-2af1ce955970-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.926534 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b77681a-0823-42e6-b0a4-2af1ce955970-logs\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.938685 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.950208 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.953218 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.971968 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q8f5v\" (UniqueName: \"kubernetes.io/projected/7b77681a-0823-42e6-b0a4-2af1ce955970-kube-api-access-q8f5v\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:26 crc kubenswrapper[4953]: I1211 10:35:26.990839 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.022836 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2529j"] Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.026540 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2529j" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.033146 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab54cd16-46dc-45ba-95b2-28afc8aef126-operator-scripts\") pod \"nova-api-db-create-vq2rm\" (UID: \"ab54cd16-46dc-45ba-95b2-28afc8aef126\") " pod="openstack/nova-api-db-create-vq2rm" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.033275 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjrdx\" (UniqueName: \"kubernetes.io/projected/ab54cd16-46dc-45ba-95b2-28afc8aef126-kube-api-access-cjrdx\") pod \"nova-api-db-create-vq2rm\" (UID: \"ab54cd16-46dc-45ba-95b2-28afc8aef126\") " pod="openstack/nova-api-db-create-vq2rm" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.034407 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab54cd16-46dc-45ba-95b2-28afc8aef126-operator-scripts\") pod \"nova-api-db-create-vq2rm\" (UID: \"ab54cd16-46dc-45ba-95b2-28afc8aef126\") " pod="openstack/nova-api-db-create-vq2rm" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.086010 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-339c-account-create-update-jjjms"] Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.094894 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-339c-account-create-update-jjjms" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.096766 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.111243 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjrdx\" (UniqueName: \"kubernetes.io/projected/ab54cd16-46dc-45ba-95b2-28afc8aef126-kube-api-access-cjrdx\") pod \"nova-api-db-create-vq2rm\" (UID: \"ab54cd16-46dc-45ba-95b2-28afc8aef126\") " pod="openstack/nova-api-db-create-vq2rm" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.114647 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2529j"] Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.123035 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.146640 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-339c-account-create-update-jjjms"] Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.148853 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9l5x\" (UniqueName: \"kubernetes.io/projected/80cd1362-41c0-4df3-8c3d-566ab77b6edf-kube-api-access-q9l5x\") pod \"nova-cell0-db-create-2529j\" (UID: \"80cd1362-41c0-4df3-8c3d-566ab77b6edf\") " pod="openstack/nova-cell0-db-create-2529j" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.148924 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80cd1362-41c0-4df3-8c3d-566ab77b6edf-operator-scripts\") pod \"nova-cell0-db-create-2529j\" (UID: \"80cd1362-41c0-4df3-8c3d-566ab77b6edf\") " pod="openstack/nova-cell0-db-create-2529j" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.178120 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vq2rm" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.224720 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-caaa-account-create-update-q2szt"] Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.226012 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-caaa-account-create-update-q2szt" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.235072 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.301394 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9l5x\" (UniqueName: \"kubernetes.io/projected/80cd1362-41c0-4df3-8c3d-566ab77b6edf-kube-api-access-q9l5x\") pod \"nova-cell0-db-create-2529j\" (UID: \"80cd1362-41c0-4df3-8c3d-566ab77b6edf\") " pod="openstack/nova-cell0-db-create-2529j" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.301929 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hddx5\" (UniqueName: \"kubernetes.io/projected/fb34da44-aab9-4100-90fd-dfd6b323e85d-kube-api-access-hddx5\") pod \"nova-api-339c-account-create-update-jjjms\" (UID: \"fb34da44-aab9-4100-90fd-dfd6b323e85d\") " pod="openstack/nova-api-339c-account-create-update-jjjms" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.301974 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80cd1362-41c0-4df3-8c3d-566ab77b6edf-operator-scripts\") pod \"nova-cell0-db-create-2529j\" (UID: \"80cd1362-41c0-4df3-8c3d-566ab77b6edf\") " pod="openstack/nova-cell0-db-create-2529j" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.302058 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb34da44-aab9-4100-90fd-dfd6b323e85d-operator-scripts\") pod \"nova-api-339c-account-create-update-jjjms\" (UID: \"fb34da44-aab9-4100-90fd-dfd6b323e85d\") " pod="openstack/nova-api-339c-account-create-update-jjjms" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.304077 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80cd1362-41c0-4df3-8c3d-566ab77b6edf-operator-scripts\") pod \"nova-cell0-db-create-2529j\" (UID: \"80cd1362-41c0-4df3-8c3d-566ab77b6edf\") " pod="openstack/nova-cell0-db-create-2529j" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.307071 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kdnkh"] Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.308533 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kdnkh" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.328152 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-caaa-account-create-update-q2szt"] Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.348079 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9l5x\" (UniqueName: \"kubernetes.io/projected/80cd1362-41c0-4df3-8c3d-566ab77b6edf-kube-api-access-q9l5x\") pod \"nova-cell0-db-create-2529j\" (UID: \"80cd1362-41c0-4df3-8c3d-566ab77b6edf\") " pod="openstack/nova-cell0-db-create-2529j" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.354349 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.360013 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kdnkh"] Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.410717 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hddx5\" (UniqueName: \"kubernetes.io/projected/fb34da44-aab9-4100-90fd-dfd6b323e85d-kube-api-access-hddx5\") pod \"nova-api-339c-account-create-update-jjjms\" (UID: \"fb34da44-aab9-4100-90fd-dfd6b323e85d\") " pod="openstack/nova-api-339c-account-create-update-jjjms" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.411702 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c095531c-eb2f-46ab-a014-2526cdc5462f-operator-scripts\") pod \"nova-cell0-caaa-account-create-update-q2szt\" (UID: \"c095531c-eb2f-46ab-a014-2526cdc5462f\") " pod="openstack/nova-cell0-caaa-account-create-update-q2szt" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.411802 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb34da44-aab9-4100-90fd-dfd6b323e85d-operator-scripts\") pod \"nova-api-339c-account-create-update-jjjms\" (UID: \"fb34da44-aab9-4100-90fd-dfd6b323e85d\") " pod="openstack/nova-api-339c-account-create-update-jjjms" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.411910 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2rw\" (UniqueName: \"kubernetes.io/projected/c095531c-eb2f-46ab-a014-2526cdc5462f-kube-api-access-8b2rw\") pod \"nova-cell0-caaa-account-create-update-q2szt\" (UID: \"c095531c-eb2f-46ab-a014-2526cdc5462f\") " pod="openstack/nova-cell0-caaa-account-create-update-q2szt" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.412046 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ddgf\" (UniqueName: \"kubernetes.io/projected/7c04ba7e-0ab4-4242-af7e-5566fc6030cb-kube-api-access-8ddgf\") pod \"nova-cell1-db-create-kdnkh\" (UID: \"7c04ba7e-0ab4-4242-af7e-5566fc6030cb\") " pod="openstack/nova-cell1-db-create-kdnkh" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.412115 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c04ba7e-0ab4-4242-af7e-5566fc6030cb-operator-scripts\") pod \"nova-cell1-db-create-kdnkh\" (UID: \"7c04ba7e-0ab4-4242-af7e-5566fc6030cb\") " pod="openstack/nova-cell1-db-create-kdnkh" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.413731 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb34da44-aab9-4100-90fd-dfd6b323e85d-operator-scripts\") pod \"nova-api-339c-account-create-update-jjjms\" (UID: \"fb34da44-aab9-4100-90fd-dfd6b323e85d\") " pod="openstack/nova-api-339c-account-create-update-jjjms" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.417632 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e356-account-create-update-k4hjq"] Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.418857 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e356-account-create-update-k4hjq" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.423250 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.429183 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e356-account-create-update-k4hjq"] Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.430117 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hddx5\" (UniqueName: \"kubernetes.io/projected/fb34da44-aab9-4100-90fd-dfd6b323e85d-kube-api-access-hddx5\") pod \"nova-api-339c-account-create-update-jjjms\" (UID: \"fb34da44-aab9-4100-90fd-dfd6b323e85d\") " pod="openstack/nova-api-339c-account-create-update-jjjms" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.514490 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxp8d\" (UniqueName: \"kubernetes.io/projected/00b5c561-61d7-4ae7-9485-a5882b9a5dc1-kube-api-access-cxp8d\") pod \"nova-cell1-e356-account-create-update-k4hjq\" (UID: \"00b5c561-61d7-4ae7-9485-a5882b9a5dc1\") " pod="openstack/nova-cell1-e356-account-create-update-k4hjq" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.514699 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c095531c-eb2f-46ab-a014-2526cdc5462f-operator-scripts\") pod \"nova-cell0-caaa-account-create-update-q2szt\" (UID: \"c095531c-eb2f-46ab-a014-2526cdc5462f\") " pod="openstack/nova-cell0-caaa-account-create-update-q2szt" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.514774 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2rw\" (UniqueName: \"kubernetes.io/projected/c095531c-eb2f-46ab-a014-2526cdc5462f-kube-api-access-8b2rw\") pod \"nova-cell0-caaa-account-create-update-q2szt\" (UID: \"c095531c-eb2f-46ab-a014-2526cdc5462f\") " pod="openstack/nova-cell0-caaa-account-create-update-q2szt" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.514837 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ddgf\" (UniqueName: \"kubernetes.io/projected/7c04ba7e-0ab4-4242-af7e-5566fc6030cb-kube-api-access-8ddgf\") pod \"nova-cell1-db-create-kdnkh\" (UID: \"7c04ba7e-0ab4-4242-af7e-5566fc6030cb\") " pod="openstack/nova-cell1-db-create-kdnkh" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.514879 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c04ba7e-0ab4-4242-af7e-5566fc6030cb-operator-scripts\") pod \"nova-cell1-db-create-kdnkh\" (UID: \"7c04ba7e-0ab4-4242-af7e-5566fc6030cb\") " pod="openstack/nova-cell1-db-create-kdnkh" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.514933 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b5c561-61d7-4ae7-9485-a5882b9a5dc1-operator-scripts\") pod \"nova-cell1-e356-account-create-update-k4hjq\" (UID: \"00b5c561-61d7-4ae7-9485-a5882b9a5dc1\") " pod="openstack/nova-cell1-e356-account-create-update-k4hjq" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.516434 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/c095531c-eb2f-46ab-a014-2526cdc5462f-operator-scripts\") pod \"nova-cell0-caaa-account-create-update-q2szt\" (UID: \"c095531c-eb2f-46ab-a014-2526cdc5462f\") " pod="openstack/nova-cell0-caaa-account-create-update-q2szt" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.517372 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c04ba7e-0ab4-4242-af7e-5566fc6030cb-operator-scripts\") pod \"nova-cell1-db-create-kdnkh\" (UID: \"7c04ba7e-0ab4-4242-af7e-5566fc6030cb\") " pod="openstack/nova-cell1-db-create-kdnkh" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.537364 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b2rw\" (UniqueName: \"kubernetes.io/projected/c095531c-eb2f-46ab-a014-2526cdc5462f-kube-api-access-8b2rw\") pod \"nova-cell0-caaa-account-create-update-q2szt\" (UID: \"c095531c-eb2f-46ab-a014-2526cdc5462f\") " pod="openstack/nova-cell0-caaa-account-create-update-q2szt" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.539032 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ddgf\" (UniqueName: \"kubernetes.io/projected/7c04ba7e-0ab4-4242-af7e-5566fc6030cb-kube-api-access-8ddgf\") pod \"nova-cell1-db-create-kdnkh\" (UID: \"7c04ba7e-0ab4-4242-af7e-5566fc6030cb\") " pod="openstack/nova-cell1-db-create-kdnkh" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.551208 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2529j" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.603852 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-339c-account-create-update-jjjms" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.619142 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b5c561-61d7-4ae7-9485-a5882b9a5dc1-operator-scripts\") pod \"nova-cell1-e356-account-create-update-k4hjq\" (UID: \"00b5c561-61d7-4ae7-9485-a5882b9a5dc1\") " pod="openstack/nova-cell1-e356-account-create-update-k4hjq" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.619208 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxp8d\" (UniqueName: \"kubernetes.io/projected/00b5c561-61d7-4ae7-9485-a5882b9a5dc1-kube-api-access-cxp8d\") pod \"nova-cell1-e356-account-create-update-k4hjq\" (UID: \"00b5c561-61d7-4ae7-9485-a5882b9a5dc1\") " pod="openstack/nova-cell1-e356-account-create-update-k4hjq" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.620423 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b5c561-61d7-4ae7-9485-a5882b9a5dc1-operator-scripts\") pod \"nova-cell1-e356-account-create-update-k4hjq\" (UID: \"00b5c561-61d7-4ae7-9485-a5882b9a5dc1\") " pod="openstack/nova-cell1-e356-account-create-update-k4hjq" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.655056 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-caaa-account-create-update-q2szt" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.657228 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxp8d\" (UniqueName: \"kubernetes.io/projected/00b5c561-61d7-4ae7-9485-a5882b9a5dc1-kube-api-access-cxp8d\") pod \"nova-cell1-e356-account-create-update-k4hjq\" (UID: \"00b5c561-61d7-4ae7-9485-a5882b9a5dc1\") " pod="openstack/nova-cell1-e356-account-create-update-k4hjq" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.667974 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kdnkh" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.669293 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e067a835-8a1a-4672-aaea-b8c101109018","Type":"ContainerStarted","Data":"30c55b9a63cff189be97f461ce82cf19d069820c204b72f08733751b6e4d8e3b"} Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.723365 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.723340411 podStartE2EDuration="3.723340411s" podCreationTimestamp="2025-12-11 10:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:35:27.722003119 +0000 UTC m=+1445.745862152" watchObservedRunningTime="2025-12-11 10:35:27.723340411 +0000 UTC m=+1445.747199444" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.741587 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e356-account-create-update-k4hjq" Dec 11 10:35:27 crc kubenswrapper[4953]: I1211 10:35:27.895198 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vq2rm"] Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.189552 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.242120 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2529j"] Dec 11 10:35:28 crc kubenswrapper[4953]: W1211 10:35:28.248261 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80cd1362_41c0_4df3_8c3d_566ab77b6edf.slice/crio-e0656702ddf32981f586d00ab4f883a911e1d2419a5659138b171348e75e90c9 WatchSource:0}: Error finding container e0656702ddf32981f586d00ab4f883a911e1d2419a5659138b171348e75e90c9: Status 404 returned error can't find the container with id e0656702ddf32981f586d00ab4f883a911e1d2419a5659138b171348e75e90c9 Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.498530 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c04c52-6e9d-4254-a222-85f06c186b92" path="/var/lib/kubelet/pods/b8c04c52-6e9d-4254-a222-85f06c186b92/volumes" Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.542539 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-339c-account-create-update-jjjms"] Dec 11 10:35:28 crc kubenswrapper[4953]: W1211 10:35:28.572242 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c04ba7e_0ab4_4242_af7e_5566fc6030cb.slice/crio-da4e2ff29021204fcb84e391e4e13bae3160664b83c90910ab6ba60f64ef7e3e 
WatchSource:0}: Error finding container da4e2ff29021204fcb84e391e4e13bae3160664b83c90910ab6ba60f64ef7e3e: Status 404 returned error can't find the container with id da4e2ff29021204fcb84e391e4e13bae3160664b83c90910ab6ba60f64ef7e3e Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.572706 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kdnkh"] Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.696943 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-caaa-account-create-update-q2szt"] Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.699644 4953 generic.go:334] "Generic (PLEG): container finished" podID="ab54cd16-46dc-45ba-95b2-28afc8aef126" containerID="abbf1dca057a3008bc1e0fc9376d99eca8728bfe7f2c6e01f3f4573f09a97a8a" exitCode=0 Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.699735 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vq2rm" event={"ID":"ab54cd16-46dc-45ba-95b2-28afc8aef126","Type":"ContainerDied","Data":"abbf1dca057a3008bc1e0fc9376d99eca8728bfe7f2c6e01f3f4573f09a97a8a"} Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.699796 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vq2rm" event={"ID":"ab54cd16-46dc-45ba-95b2-28afc8aef126","Type":"ContainerStarted","Data":"4e6bc1a5ff8b97e8cddca1f98df723065eca6fe8b25e9145e9cf670b6ba531b0"} Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.703930 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2529j" event={"ID":"80cd1362-41c0-4df3-8c3d-566ab77b6edf","Type":"ContainerStarted","Data":"fdd070f6d6ae0ce7f82371346d196e9b99e064f7ba7340450355520e870d3e65"} Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.703981 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2529j" event={"ID":"80cd1362-41c0-4df3-8c3d-566ab77b6edf","Type":"ContainerStarted","Data":"e0656702ddf32981f586d00ab4f883a911e1d2419a5659138b171348e75e90c9"} Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.707590 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-339c-account-create-update-jjjms" event={"ID":"fb34da44-aab9-4100-90fd-dfd6b323e85d","Type":"ContainerStarted","Data":"6877ac5166a805a2db6ed490e034b70eead56e39bdf6e78b1214b6b7c85f9ed3"} Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.710061 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kdnkh" event={"ID":"7c04ba7e-0ab4-4242-af7e-5566fc6030cb","Type":"ContainerStarted","Data":"da4e2ff29021204fcb84e391e4e13bae3160664b83c90910ab6ba60f64ef7e3e"} Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.714024 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b77681a-0823-42e6-b0a4-2af1ce955970","Type":"ContainerStarted","Data":"e0f85e3a56fdd109713da0f7db29fda300773a72206b73c5e4adf00a8cc8c7bf"} Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.719726 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e356-account-create-update-k4hjq"] Dec 11 10:35:28 crc kubenswrapper[4953]: I1211 10:35:28.740096 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-2529j" podStartSLOduration=2.74007593 podStartE2EDuration="2.74007593s" podCreationTimestamp="2025-12-11 10:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:35:28.733629867 +0000 UTC m=+1446.757488910" watchObservedRunningTime="2025-12-11 10:35:28.74007593 +0000 UTC m=+1446.763934973" Dec 11 10:35:28 crc kubenswrapper[4953]: W1211 10:35:28.765934 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b5c561_61d7_4ae7_9485_a5882b9a5dc1.slice/crio-f214833e4167c9491a65e5d0ce472b19c7e9672442e1ef919bb5b9f92fe6483a WatchSource:0}: Error finding container f214833e4167c9491a65e5d0ce472b19c7e9672442e1ef919bb5b9f92fe6483a: Status 404 returned error can't find the container with id f214833e4167c9491a65e5d0ce472b19c7e9672442e1ef919bb5b9f92fe6483a Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.162359 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.283533 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lrqj\" (UniqueName: \"kubernetes.io/projected/e6a86b70-2c86-45dc-a446-004affe33e67-kube-api-access-7lrqj\") pod \"e6a86b70-2c86-45dc-a446-004affe33e67\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.283729 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-combined-ca-bundle\") pod \"e6a86b70-2c86-45dc-a446-004affe33e67\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.283770 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-sg-core-conf-yaml\") pod \"e6a86b70-2c86-45dc-a446-004affe33e67\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.283802 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6a86b70-2c86-45dc-a446-004affe33e67-log-httpd\") pod \"e6a86b70-2c86-45dc-a446-004affe33e67\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.283847 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-config-data\") pod \"e6a86b70-2c86-45dc-a446-004affe33e67\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.283947 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6a86b70-2c86-45dc-a446-004affe33e67-run-httpd\") pod \"e6a86b70-2c86-45dc-a446-004affe33e67\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.283973 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-scripts\") pod \"e6a86b70-2c86-45dc-a446-004affe33e67\" (UID: \"e6a86b70-2c86-45dc-a446-004affe33e67\") " Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.284467 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e6a86b70-2c86-45dc-a446-004affe33e67-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e6a86b70-2c86-45dc-a446-004affe33e67" (UID: "e6a86b70-2c86-45dc-a446-004affe33e67"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.284890 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6a86b70-2c86-45dc-a446-004affe33e67-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e6a86b70-2c86-45dc-a446-004affe33e67" (UID: "e6a86b70-2c86-45dc-a446-004affe33e67"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.288905 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-scripts" (OuterVolumeSpecName: "scripts") pod "e6a86b70-2c86-45dc-a446-004affe33e67" (UID: "e6a86b70-2c86-45dc-a446-004affe33e67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.289224 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a86b70-2c86-45dc-a446-004affe33e67-kube-api-access-7lrqj" (OuterVolumeSpecName: "kube-api-access-7lrqj") pod "e6a86b70-2c86-45dc-a446-004affe33e67" (UID: "e6a86b70-2c86-45dc-a446-004affe33e67"). InnerVolumeSpecName "kube-api-access-7lrqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.333204 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e6a86b70-2c86-45dc-a446-004affe33e67" (UID: "e6a86b70-2c86-45dc-a446-004affe33e67"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.379610 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6a86b70-2c86-45dc-a446-004affe33e67" (UID: "e6a86b70-2c86-45dc-a446-004affe33e67"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.386128 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.386173 4953 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.386189 4953 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6a86b70-2c86-45dc-a446-004affe33e67-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.386200 4953 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6a86b70-2c86-45dc-a446-004affe33e67-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.386211 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.386224 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lrqj\" (UniqueName: \"kubernetes.io/projected/e6a86b70-2c86-45dc-a446-004affe33e67-kube-api-access-7lrqj\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.420485 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-config-data" (OuterVolumeSpecName: "config-data") pod "e6a86b70-2c86-45dc-a446-004affe33e67" (UID: "e6a86b70-2c86-45dc-a446-004affe33e67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.487652 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a86b70-2c86-45dc-a446-004affe33e67-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.737957 4953 generic.go:334] "Generic (PLEG): container finished" podID="e6a86b70-2c86-45dc-a446-004affe33e67" containerID="88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792" exitCode=0 Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.738036 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.738033 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6a86b70-2c86-45dc-a446-004affe33e67","Type":"ContainerDied","Data":"88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792"} Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.738166 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6a86b70-2c86-45dc-a446-004affe33e67","Type":"ContainerDied","Data":"4e829e90163c6bd17a9b6a4e54ef0f78231009f09ed45753ae92b20599125133"} Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.738187 4953 scope.go:117] "RemoveContainer" containerID="05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.746359 4953 generic.go:334] "Generic (PLEG): container finished" podID="00b5c561-61d7-4ae7-9485-a5882b9a5dc1" containerID="0491ffc5f1bc0bc44582efeba6a5a0935a60c935c046c46e19e369d2e4913539" exitCode=0 Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.746849 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e356-account-create-update-k4hjq" event={"ID":"00b5c561-61d7-4ae7-9485-a5882b9a5dc1","Type":"ContainerDied","Data":"0491ffc5f1bc0bc44582efeba6a5a0935a60c935c046c46e19e369d2e4913539"} Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.746881 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e356-account-create-update-k4hjq" event={"ID":"00b5c561-61d7-4ae7-9485-a5882b9a5dc1","Type":"ContainerStarted","Data":"f214833e4167c9491a65e5d0ce472b19c7e9672442e1ef919bb5b9f92fe6483a"} Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.788741 4953 generic.go:334] "Generic (PLEG): container finished" podID="fb34da44-aab9-4100-90fd-dfd6b323e85d" containerID="2225e689693ec9440a39dbcbbd5349461e4303b1a07c6561f0168f098dab8191" exitCode=0 Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.788802 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-339c-account-create-update-jjjms" event={"ID":"fb34da44-aab9-4100-90fd-dfd6b323e85d","Type":"ContainerDied","Data":"2225e689693ec9440a39dbcbbd5349461e4303b1a07c6561f0168f098dab8191"} Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.797365 4953 generic.go:334] "Generic (PLEG): container finished" podID="7c04ba7e-0ab4-4242-af7e-5566fc6030cb" containerID="2267e211bfce5ffc305d093bf44d566700acca94813d4aea6430f83c0ecb326d" exitCode=0 Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.797463 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kdnkh" event={"ID":"7c04ba7e-0ab4-4242-af7e-5566fc6030cb","Type":"ContainerDied","Data":"2267e211bfce5ffc305d093bf44d566700acca94813d4aea6430f83c0ecb326d"} Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.802633 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b77681a-0823-42e6-b0a4-2af1ce955970","Type":"ContainerStarted","Data":"3dd428abe094a4785fe247c46053c25a62247016c38cd9af55762bbf581ab80f"} Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.805391 4953 generic.go:334] "Generic (PLEG): container finished" podID="80cd1362-41c0-4df3-8c3d-566ab77b6edf" containerID="fdd070f6d6ae0ce7f82371346d196e9b99e064f7ba7340450355520e870d3e65" exitCode=0 Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.805450 4953 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-db-create-2529j" event={"ID":"80cd1362-41c0-4df3-8c3d-566ab77b6edf","Type":"ContainerDied","Data":"fdd070f6d6ae0ce7f82371346d196e9b99e064f7ba7340450355520e870d3e65"} Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.807017 4953 generic.go:334] "Generic (PLEG): container finished" podID="c095531c-eb2f-46ab-a014-2526cdc5462f" containerID="10c8d2ad3cc0892332f1c89eaa6623a95eee3fe76e3f7a9daa5675267b2e9091" exitCode=0 Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.807058 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-caaa-account-create-update-q2szt" event={"ID":"c095531c-eb2f-46ab-a014-2526cdc5462f","Type":"ContainerDied","Data":"10c8d2ad3cc0892332f1c89eaa6623a95eee3fe76e3f7a9daa5675267b2e9091"} Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.807109 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-caaa-account-create-update-q2szt" event={"ID":"c095531c-eb2f-46ab-a014-2526cdc5462f","Type":"ContainerStarted","Data":"ad486dea69f51206f84ed928f2725045bdf4c0e93d9880e6ff8c7533f994f99b"} Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.826233 4953 scope.go:117] "RemoveContainer" containerID="e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.878336 4953 scope.go:117] "RemoveContainer" containerID="e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.896583 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.909661 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.917992 4953 scope.go:117] "RemoveContainer" containerID="88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.918159 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:29 crc kubenswrapper[4953]: E1211 10:35:29.918938 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="ceilometer-notification-agent" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.918957 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="ceilometer-notification-agent" Dec 11 10:35:29 crc kubenswrapper[4953]: E1211 10:35:29.918985 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="ceilometer-central-agent" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.918993 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="ceilometer-central-agent" Dec 11 10:35:29 crc kubenswrapper[4953]: E1211 10:35:29.919008 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="sg-core" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.919014 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="sg-core" Dec 11 10:35:29 crc kubenswrapper[4953]: E1211 10:35:29.919047 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="proxy-httpd" Dec 11 10:35:29 crc 
kubenswrapper[4953]: I1211 10:35:29.919053 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="proxy-httpd" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.919321 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="ceilometer-notification-agent" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.919335 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="sg-core" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.919352 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="proxy-httpd" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.919366 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" containerName="ceilometer-central-agent" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.921098 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.926057 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.930783 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.935100 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.960688 4953 scope.go:117] "RemoveContainer" containerID="05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8" Dec 11 10:35:29 crc kubenswrapper[4953]: E1211 10:35:29.961231 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8\": container with ID starting with 05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8 not found: ID does not exist" containerID="05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.961259 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8"} err="failed to get container status \"05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8\": rpc error: code = NotFound desc = could not find container \"05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8\": container with ID starting with 05f0d441546a6589e8391f79f24192efc727d811c4df82dc4fe6bbc931c59ae8 not found: ID does not exist" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.961283 4953 scope.go:117] "RemoveContainer" containerID="e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086" Dec 11 10:35:29 crc kubenswrapper[4953]: E1211 10:35:29.961651 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086\": container with ID starting with e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086 not found: ID does not exist" containerID="e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086" Dec 11 10:35:29 crc 
kubenswrapper[4953]: I1211 10:35:29.961668 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086"} err="failed to get container status \"e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086\": rpc error: code = NotFound desc = could not find container \"e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086\": container with ID starting with e00b05052f4dfeb34195ff75719c94b66b12e27fa51e40774f72ad6e607e2086 not found: ID does not exist" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.961683 4953 scope.go:117] "RemoveContainer" containerID="e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a" Dec 11 10:35:29 crc kubenswrapper[4953]: E1211 10:35:29.961922 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a\": container with ID starting with e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a not found: ID does not exist" containerID="e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.961939 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a"} err="failed to get container status \"e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a\": rpc error: code = NotFound desc = could not find container \"e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a\": container with ID starting with e67d999cea9a46e7f2a2d118af0e0e80cc90eb09935fd36400921f3869b0b99a not found: ID does not exist" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.961952 4953 scope.go:117] "RemoveContainer" containerID="88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792" Dec 11 10:35:29 crc kubenswrapper[4953]: E1211 10:35:29.962164 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792\": container with ID starting with 88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792 not found: ID does not exist" containerID="88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792" Dec 11 10:35:29 crc kubenswrapper[4953]: I1211 10:35:29.962179 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792"} err="failed to get container status \"88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792\": rpc error: code = NotFound desc = could not find container \"88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792\": container with ID starting with 88836a26d94c1e06dfec99dab514224230bb38e5d142ea483ddeb5e930200792 not found: ID does not exist" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.148917 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.149267 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-scripts\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.149308 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-config-data\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.149354 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5nqn\" (UniqueName: \"kubernetes.io/projected/6b808d3c-37db-43c0-bec6-8edeca8028c5-kube-api-access-v5nqn\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.149421 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.149470 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b808d3c-37db-43c0-bec6-8edeca8028c5-run-httpd\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.149634 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b808d3c-37db-43c0-bec6-8edeca8028c5-log-httpd\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.251693 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.251775 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-scripts\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.251809 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-config-data\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.251837 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5nqn\" (UniqueName: \"kubernetes.io/projected/6b808d3c-37db-43c0-bec6-8edeca8028c5-kube-api-access-v5nqn\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" 
Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.251872 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.251910 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b808d3c-37db-43c0-bec6-8edeca8028c5-run-httpd\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.251975 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b808d3c-37db-43c0-bec6-8edeca8028c5-log-httpd\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.252473 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b808d3c-37db-43c0-bec6-8edeca8028c5-log-httpd\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.252981 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b808d3c-37db-43c0-bec6-8edeca8028c5-run-httpd\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.259686 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.260030 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.264670 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-scripts\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.274025 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5nqn\" (UniqueName: \"kubernetes.io/projected/6b808d3c-37db-43c0-bec6-8edeca8028c5-kube-api-access-v5nqn\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.282984 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-config-data\") pod \"ceilometer-0\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.367953 4953 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vq2rm" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.484779 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a86b70-2c86-45dc-a446-004affe33e67" path="/var/lib/kubelet/pods/e6a86b70-2c86-45dc-a446-004affe33e67/volumes" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.549172 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.559293 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjrdx\" (UniqueName: \"kubernetes.io/projected/ab54cd16-46dc-45ba-95b2-28afc8aef126-kube-api-access-cjrdx\") pod \"ab54cd16-46dc-45ba-95b2-28afc8aef126\" (UID: \"ab54cd16-46dc-45ba-95b2-28afc8aef126\") " Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.559531 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab54cd16-46dc-45ba-95b2-28afc8aef126-operator-scripts\") pod \"ab54cd16-46dc-45ba-95b2-28afc8aef126\" (UID: \"ab54cd16-46dc-45ba-95b2-28afc8aef126\") " Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.563115 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab54cd16-46dc-45ba-95b2-28afc8aef126-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab54cd16-46dc-45ba-95b2-28afc8aef126" (UID: "ab54cd16-46dc-45ba-95b2-28afc8aef126"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.567187 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab54cd16-46dc-45ba-95b2-28afc8aef126-kube-api-access-cjrdx" (OuterVolumeSpecName: "kube-api-access-cjrdx") pod "ab54cd16-46dc-45ba-95b2-28afc8aef126" (UID: "ab54cd16-46dc-45ba-95b2-28afc8aef126"). InnerVolumeSpecName "kube-api-access-cjrdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.662392 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab54cd16-46dc-45ba-95b2-28afc8aef126-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.662436 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjrdx\" (UniqueName: \"kubernetes.io/projected/ab54cd16-46dc-45ba-95b2-28afc8aef126-kube-api-access-cjrdx\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.821225 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vq2rm" event={"ID":"ab54cd16-46dc-45ba-95b2-28afc8aef126","Type":"ContainerDied","Data":"4e6bc1a5ff8b97e8cddca1f98df723065eca6fe8b25e9145e9cf670b6ba531b0"} Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.821612 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e6bc1a5ff8b97e8cddca1f98df723065eca6fe8b25e9145e9cf670b6ba531b0" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.821270 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vq2rm" Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.840008 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b77681a-0823-42e6-b0a4-2af1ce955970","Type":"ContainerStarted","Data":"4221eaf86758a08993df7de85552e51a217b8b7260281a70c92cd1a666135bc7"} Dec 11 10:35:30 crc kubenswrapper[4953]: I1211 10:35:30.883379 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.88335605 podStartE2EDuration="4.88335605s" podCreationTimestamp="2025-12-11 10:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:35:30.878121265 +0000 UTC m=+1448.901980298" watchObservedRunningTime="2025-12-11 10:35:30.88335605 +0000 UTC m=+1448.907215083" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.181269 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.186843 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.321121 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-339c-account-create-update-jjjms" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.457610 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2529j" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.466589 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e356-account-create-update-k4hjq" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.478931 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hddx5\" (UniqueName: \"kubernetes.io/projected/fb34da44-aab9-4100-90fd-dfd6b323e85d-kube-api-access-hddx5\") pod \"fb34da44-aab9-4100-90fd-dfd6b323e85d\" (UID: \"fb34da44-aab9-4100-90fd-dfd6b323e85d\") " Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.479089 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb34da44-aab9-4100-90fd-dfd6b323e85d-operator-scripts\") pod \"fb34da44-aab9-4100-90fd-dfd6b323e85d\" (UID: \"fb34da44-aab9-4100-90fd-dfd6b323e85d\") " Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.489096 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb34da44-aab9-4100-90fd-dfd6b323e85d-kube-api-access-hddx5" (OuterVolumeSpecName: "kube-api-access-hddx5") pod "fb34da44-aab9-4100-90fd-dfd6b323e85d" (UID: "fb34da44-aab9-4100-90fd-dfd6b323e85d"). InnerVolumeSpecName "kube-api-access-hddx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.490155 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb34da44-aab9-4100-90fd-dfd6b323e85d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb34da44-aab9-4100-90fd-dfd6b323e85d" (UID: "fb34da44-aab9-4100-90fd-dfd6b323e85d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.498930 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-caaa-account-create-update-q2szt" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.527764 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kdnkh" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.581231 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b5c561-61d7-4ae7-9485-a5882b9a5dc1-operator-scripts\") pod \"00b5c561-61d7-4ae7-9485-a5882b9a5dc1\" (UID: \"00b5c561-61d7-4ae7-9485-a5882b9a5dc1\") " Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.581438 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxp8d\" (UniqueName: \"kubernetes.io/projected/00b5c561-61d7-4ae7-9485-a5882b9a5dc1-kube-api-access-cxp8d\") pod \"00b5c561-61d7-4ae7-9485-a5882b9a5dc1\" (UID: \"00b5c561-61d7-4ae7-9485-a5882b9a5dc1\") " Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.581494 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80cd1362-41c0-4df3-8c3d-566ab77b6edf-operator-scripts\") pod \"80cd1362-41c0-4df3-8c3d-566ab77b6edf\" (UID: \"80cd1362-41c0-4df3-8c3d-566ab77b6edf\") " Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.581672 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b2rw\" (UniqueName: \"kubernetes.io/projected/c095531c-eb2f-46ab-a014-2526cdc5462f-kube-api-access-8b2rw\") pod \"c095531c-eb2f-46ab-a014-2526cdc5462f\" (UID: \"c095531c-eb2f-46ab-a014-2526cdc5462f\") " Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.581766 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9l5x\" (UniqueName: \"kubernetes.io/projected/80cd1362-41c0-4df3-8c3d-566ab77b6edf-kube-api-access-q9l5x\") pod \"80cd1362-41c0-4df3-8c3d-566ab77b6edf\" (UID: \"80cd1362-41c0-4df3-8c3d-566ab77b6edf\") " Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.581816 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00b5c561-61d7-4ae7-9485-a5882b9a5dc1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00b5c561-61d7-4ae7-9485-a5882b9a5dc1" (UID: "00b5c561-61d7-4ae7-9485-a5882b9a5dc1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.581877 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c095531c-eb2f-46ab-a014-2526cdc5462f-operator-scripts\") pod \"c095531c-eb2f-46ab-a014-2526cdc5462f\" (UID: \"c095531c-eb2f-46ab-a014-2526cdc5462f\") " Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.582224 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cd1362-41c0-4df3-8c3d-566ab77b6edf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80cd1362-41c0-4df3-8c3d-566ab77b6edf" (UID: "80cd1362-41c0-4df3-8c3d-566ab77b6edf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.582497 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b5c561-61d7-4ae7-9485-a5882b9a5dc1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.582525 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb34da44-aab9-4100-90fd-dfd6b323e85d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.582537 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80cd1362-41c0-4df3-8c3d-566ab77b6edf-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.582547 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hddx5\" (UniqueName: \"kubernetes.io/projected/fb34da44-aab9-4100-90fd-dfd6b323e85d-kube-api-access-hddx5\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.583794 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c095531c-eb2f-46ab-a014-2526cdc5462f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c095531c-eb2f-46ab-a014-2526cdc5462f" (UID: "c095531c-eb2f-46ab-a014-2526cdc5462f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.590791 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c095531c-eb2f-46ab-a014-2526cdc5462f-kube-api-access-8b2rw" (OuterVolumeSpecName: "kube-api-access-8b2rw") pod "c095531c-eb2f-46ab-a014-2526cdc5462f" (UID: "c095531c-eb2f-46ab-a014-2526cdc5462f"). InnerVolumeSpecName "kube-api-access-8b2rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.592896 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00b5c561-61d7-4ae7-9485-a5882b9a5dc1-kube-api-access-cxp8d" (OuterVolumeSpecName: "kube-api-access-cxp8d") pod "00b5c561-61d7-4ae7-9485-a5882b9a5dc1" (UID: "00b5c561-61d7-4ae7-9485-a5882b9a5dc1"). InnerVolumeSpecName "kube-api-access-cxp8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.598699 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cd1362-41c0-4df3-8c3d-566ab77b6edf-kube-api-access-q9l5x" (OuterVolumeSpecName: "kube-api-access-q9l5x") pod "80cd1362-41c0-4df3-8c3d-566ab77b6edf" (UID: "80cd1362-41c0-4df3-8c3d-566ab77b6edf"). InnerVolumeSpecName "kube-api-access-q9l5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.684680 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c04ba7e-0ab4-4242-af7e-5566fc6030cb-operator-scripts\") pod \"7c04ba7e-0ab4-4242-af7e-5566fc6030cb\" (UID: \"7c04ba7e-0ab4-4242-af7e-5566fc6030cb\") " Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.684728 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ddgf\" (UniqueName: \"kubernetes.io/projected/7c04ba7e-0ab4-4242-af7e-5566fc6030cb-kube-api-access-8ddgf\") pod \"7c04ba7e-0ab4-4242-af7e-5566fc6030cb\" (UID: \"7c04ba7e-0ab4-4242-af7e-5566fc6030cb\") " Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.685310 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c095531c-eb2f-46ab-a014-2526cdc5462f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.685333 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxp8d\" (UniqueName: \"kubernetes.io/projected/00b5c561-61d7-4ae7-9485-a5882b9a5dc1-kube-api-access-cxp8d\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.685345 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b2rw\" (UniqueName: \"kubernetes.io/projected/c095531c-eb2f-46ab-a014-2526cdc5462f-kube-api-access-8b2rw\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.685354 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9l5x\" (UniqueName: \"kubernetes.io/projected/80cd1362-41c0-4df3-8c3d-566ab77b6edf-kube-api-access-q9l5x\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.685958 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c04ba7e-0ab4-4242-af7e-5566fc6030cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c04ba7e-0ab4-4242-af7e-5566fc6030cb" (UID: "7c04ba7e-0ab4-4242-af7e-5566fc6030cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.689307 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c04ba7e-0ab4-4242-af7e-5566fc6030cb-kube-api-access-8ddgf" (OuterVolumeSpecName: "kube-api-access-8ddgf") pod "7c04ba7e-0ab4-4242-af7e-5566fc6030cb" (UID: "7c04ba7e-0ab4-4242-af7e-5566fc6030cb"). InnerVolumeSpecName "kube-api-access-8ddgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.787476 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c04ba7e-0ab4-4242-af7e-5566fc6030cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.787849 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ddgf\" (UniqueName: \"kubernetes.io/projected/7c04ba7e-0ab4-4242-af7e-5566fc6030cb-kube-api-access-8ddgf\") on node \"crc\" DevicePath \"\"" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.851654 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-kdnkh" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.851707 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kdnkh" event={"ID":"7c04ba7e-0ab4-4242-af7e-5566fc6030cb","Type":"ContainerDied","Data":"da4e2ff29021204fcb84e391e4e13bae3160664b83c90910ab6ba60f64ef7e3e"} Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.851773 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da4e2ff29021204fcb84e391e4e13bae3160664b83c90910ab6ba60f64ef7e3e" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.853939 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2529j" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.853934 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2529j" event={"ID":"80cd1362-41c0-4df3-8c3d-566ab77b6edf","Type":"ContainerDied","Data":"e0656702ddf32981f586d00ab4f883a911e1d2419a5659138b171348e75e90c9"} Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.854089 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0656702ddf32981f586d00ab4f883a911e1d2419a5659138b171348e75e90c9" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.855927 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-caaa-account-create-update-q2szt" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.855947 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-caaa-account-create-update-q2szt" event={"ID":"c095531c-eb2f-46ab-a014-2526cdc5462f","Type":"ContainerDied","Data":"ad486dea69f51206f84ed928f2725045bdf4c0e93d9880e6ff8c7533f994f99b"} Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.856210 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad486dea69f51206f84ed928f2725045bdf4c0e93d9880e6ff8c7533f994f99b" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.857720 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b808d3c-37db-43c0-bec6-8edeca8028c5","Type":"ContainerStarted","Data":"4cbfefa823421584bc503a7f333d75109d486ac2fc4ab996d7a1a8b8a7d7a0b9"} Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.859555 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e356-account-create-update-k4hjq" event={"ID":"00b5c561-61d7-4ae7-9485-a5882b9a5dc1","Type":"ContainerDied","Data":"f214833e4167c9491a65e5d0ce472b19c7e9672442e1ef919bb5b9f92fe6483a"} Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.859623 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f214833e4167c9491a65e5d0ce472b19c7e9672442e1ef919bb5b9f92fe6483a" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.859591 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e356-account-create-update-k4hjq" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.861439 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-339c-account-create-update-jjjms" event={"ID":"fb34da44-aab9-4100-90fd-dfd6b323e85d","Type":"ContainerDied","Data":"6877ac5166a805a2db6ed490e034b70eead56e39bdf6e78b1214b6b7c85f9ed3"} Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.861481 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6877ac5166a805a2db6ed490e034b70eead56e39bdf6e78b1214b6b7c85f9ed3" Dec 11 10:35:31 crc kubenswrapper[4953]: I1211 10:35:31.861449 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-339c-account-create-update-jjjms" Dec 11 10:35:32 crc kubenswrapper[4953]: I1211 10:35:32.873193 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b808d3c-37db-43c0-bec6-8edeca8028c5","Type":"ContainerStarted","Data":"a01d9ed466563c185cfcacb8d3f0b91daea77fd31e94cfbede7a8ecb1cd70b95"} Dec 11 10:35:32 crc kubenswrapper[4953]: I1211 10:35:32.873258 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b808d3c-37db-43c0-bec6-8edeca8028c5","Type":"ContainerStarted","Data":"98aa167b6a86472673752a8c31fe391db3dab7e00df30e5692bbe6b0d694ba44"} Dec 11 10:35:33 crc kubenswrapper[4953]: I1211 10:35:33.885468 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b808d3c-37db-43c0-bec6-8edeca8028c5","Type":"ContainerStarted","Data":"9097898d38c6000d19b402e2d3cee9123e73375df063ef446677a79125d77a30"} Dec 11 10:35:34 crc kubenswrapper[4953]: I1211 10:35:34.985410 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 10:35:34 crc kubenswrapper[4953]: I1211 10:35:34.986525 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 10:35:35 crc kubenswrapper[4953]: I1211 10:35:35.162066 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 10:35:35 crc kubenswrapper[4953]: I1211 10:35:35.189875 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 10:35:35 crc kubenswrapper[4953]: I1211 10:35:35.941027 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b808d3c-37db-43c0-bec6-8edeca8028c5","Type":"ContainerStarted","Data":"cc48106ab88f39e2fa2e3cc788010ec227413582c1f829e7114f90b57f2b3c7a"} Dec 11 10:35:35 crc kubenswrapper[4953]: I1211 10:35:35.941552 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 10:35:35 crc kubenswrapper[4953]: I1211 10:35:35.941583 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 10:35:35 crc kubenswrapper[4953]: I1211 10:35:35.941594 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 10:35:35 crc kubenswrapper[4953]: I1211 10:35:35.993507 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.279028487 podStartE2EDuration="6.993486798s" podCreationTimestamp="2025-12-11 10:35:29 
+0000 UTC" firstStartedPulling="2025-12-11 10:35:31.186583211 +0000 UTC m=+1449.210442244" lastFinishedPulling="2025-12-11 10:35:34.901041532 +0000 UTC m=+1452.924900555" observedRunningTime="2025-12-11 10:35:35.98593185 +0000 UTC m=+1454.009790883" watchObservedRunningTime="2025-12-11 10:35:35.993486798 +0000 UTC m=+1454.017345831" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.355523 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.356743 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.408001 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.413114 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.669592 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-v4tfr"] Dec 11 10:35:37 crc kubenswrapper[4953]: E1211 10:35:37.670111 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c04ba7e-0ab4-4242-af7e-5566fc6030cb" containerName="mariadb-database-create" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.670137 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c04ba7e-0ab4-4242-af7e-5566fc6030cb" containerName="mariadb-database-create" Dec 11 10:35:37 crc kubenswrapper[4953]: E1211 10:35:37.670154 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab54cd16-46dc-45ba-95b2-28afc8aef126" containerName="mariadb-database-create" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.670163 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab54cd16-46dc-45ba-95b2-28afc8aef126" containerName="mariadb-database-create" Dec 11 10:35:37 crc kubenswrapper[4953]: E1211 10:35:37.670174 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb34da44-aab9-4100-90fd-dfd6b323e85d" containerName="mariadb-account-create-update" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.670182 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb34da44-aab9-4100-90fd-dfd6b323e85d" containerName="mariadb-account-create-update" Dec 11 10:35:37 crc kubenswrapper[4953]: E1211 10:35:37.670199 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00b5c561-61d7-4ae7-9485-a5882b9a5dc1" containerName="mariadb-account-create-update" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.670207 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="00b5c561-61d7-4ae7-9485-a5882b9a5dc1" containerName="mariadb-account-create-update" Dec 11 10:35:37 crc kubenswrapper[4953]: E1211 10:35:37.670223 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c095531c-eb2f-46ab-a014-2526cdc5462f" containerName="mariadb-account-create-update" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.670232 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c095531c-eb2f-46ab-a014-2526cdc5462f" containerName="mariadb-account-create-update" Dec 11 10:35:37 crc kubenswrapper[4953]: E1211 10:35:37.670255 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cd1362-41c0-4df3-8c3d-566ab77b6edf" containerName="mariadb-database-create" 
Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.670264 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cd1362-41c0-4df3-8c3d-566ab77b6edf" containerName="mariadb-database-create" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.670495 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c095531c-eb2f-46ab-a014-2526cdc5462f" containerName="mariadb-account-create-update" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.670515 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb34da44-aab9-4100-90fd-dfd6b323e85d" containerName="mariadb-account-create-update" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.670530 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="00b5c561-61d7-4ae7-9485-a5882b9a5dc1" containerName="mariadb-account-create-update" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.670543 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c04ba7e-0ab4-4242-af7e-5566fc6030cb" containerName="mariadb-database-create" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.670557 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cd1362-41c0-4df3-8c3d-566ab77b6edf" containerName="mariadb-database-create" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.670586 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab54cd16-46dc-45ba-95b2-28afc8aef126" containerName="mariadb-database-create" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.671405 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.675211 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vq2tb" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.675246 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.675633 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.694035 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-scripts\") pod \"nova-cell0-conductor-db-sync-v4tfr\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.694280 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9gjl\" (UniqueName: \"kubernetes.io/projected/fad155f8-cdef-44b0-9be5-5a7db0881abc-kube-api-access-g9gjl\") pod \"nova-cell0-conductor-db-sync-v4tfr\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.694346 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-config-data\") pod \"nova-cell0-conductor-db-sync-v4tfr\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.694384 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-v4tfr\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.699847 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-v4tfr"] Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.795971 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9gjl\" (UniqueName: \"kubernetes.io/projected/fad155f8-cdef-44b0-9be5-5a7db0881abc-kube-api-access-g9gjl\") pod \"nova-cell0-conductor-db-sync-v4tfr\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.796391 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-config-data\") pod \"nova-cell0-conductor-db-sync-v4tfr\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.796437 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-v4tfr\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.796523 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-scripts\") pod \"nova-cell0-conductor-db-sync-v4tfr\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.803096 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-config-data\") pod \"nova-cell0-conductor-db-sync-v4tfr\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.805145 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-scripts\") pod \"nova-cell0-conductor-db-sync-v4tfr\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.814211 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-v4tfr\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.816219 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9gjl\" (UniqueName: \"kubernetes.io/projected/fad155f8-cdef-44b0-9be5-5a7db0881abc-kube-api-access-g9gjl\") pod 
\"nova-cell0-conductor-db-sync-v4tfr\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.954612 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.954645 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.955393 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:37 crc kubenswrapper[4953]: I1211 10:35:37.955467 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:38 crc kubenswrapper[4953]: I1211 10:35:38.019136 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:35:38 crc kubenswrapper[4953]: I1211 10:35:38.874063 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-v4tfr"] Dec 11 10:35:38 crc kubenswrapper[4953]: I1211 10:35:38.988788 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-v4tfr" event={"ID":"fad155f8-cdef-44b0-9be5-5a7db0881abc","Type":"ContainerStarted","Data":"325f98c4efe6e8fcba59ca41b680ba85cd88a672376128e5db1e016a53051007"} Dec 11 10:35:39 crc kubenswrapper[4953]: I1211 10:35:39.215163 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 10:35:39 crc kubenswrapper[4953]: I1211 10:35:39.215498 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:35:39 crc kubenswrapper[4953]: I1211 10:35:39.399535 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 10:35:40 crc kubenswrapper[4953]: I1211 10:35:40.974305 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:40 crc kubenswrapper[4953]: I1211 10:35:40.974624 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:35:41 crc kubenswrapper[4953]: I1211 10:35:41.624322 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 10:35:50 crc kubenswrapper[4953]: I1211 10:35:50.216799 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-v4tfr" event={"ID":"fad155f8-cdef-44b0-9be5-5a7db0881abc","Type":"ContainerStarted","Data":"14cefb58e0c43b389056f4cf8bb308599dd6cf25bab0e3a4846b0b83bde66613"} Dec 11 10:35:50 crc kubenswrapper[4953]: I1211 10:35:50.240795 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-v4tfr" podStartSLOduration=2.155726507 podStartE2EDuration="13.240774s" podCreationTimestamp="2025-12-11 10:35:37 +0000 UTC" firstStartedPulling="2025-12-11 10:35:38.877608377 +0000 UTC m=+1456.901467410" lastFinishedPulling="2025-12-11 10:35:49.96265587 +0000 UTC m=+1467.986514903" observedRunningTime="2025-12-11 10:35:50.238021483 +0000 UTC m=+1468.261880516" watchObservedRunningTime="2025-12-11 10:35:50.240774 +0000 UTC m=+1468.264633033" Dec 11 10:36:00 crc kubenswrapper[4953]: I1211 10:36:00.555282 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0" Dec 11 10:36:03 crc kubenswrapper[4953]: I1211 10:36:03.397805 4953 generic.go:334] "Generic (PLEG): container finished" podID="fad155f8-cdef-44b0-9be5-5a7db0881abc" containerID="14cefb58e0c43b389056f4cf8bb308599dd6cf25bab0e3a4846b0b83bde66613" exitCode=0 Dec 11 10:36:03 crc kubenswrapper[4953]: I1211 10:36:03.397897 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-v4tfr" event={"ID":"fad155f8-cdef-44b0-9be5-5a7db0881abc","Type":"ContainerDied","Data":"14cefb58e0c43b389056f4cf8bb308599dd6cf25bab0e3a4846b0b83bde66613"} Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.775126 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.879419 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-config-data\") pod \"fad155f8-cdef-44b0-9be5-5a7db0881abc\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.879975 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-scripts\") pod \"fad155f8-cdef-44b0-9be5-5a7db0881abc\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.880012 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9gjl\" (UniqueName: \"kubernetes.io/projected/fad155f8-cdef-44b0-9be5-5a7db0881abc-kube-api-access-g9gjl\") pod \"fad155f8-cdef-44b0-9be5-5a7db0881abc\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.880098 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-combined-ca-bundle\") pod \"fad155f8-cdef-44b0-9be5-5a7db0881abc\" (UID: \"fad155f8-cdef-44b0-9be5-5a7db0881abc\") " Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.897734 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad155f8-cdef-44b0-9be5-5a7db0881abc-kube-api-access-g9gjl" (OuterVolumeSpecName: "kube-api-access-g9gjl") pod "fad155f8-cdef-44b0-9be5-5a7db0881abc" (UID: "fad155f8-cdef-44b0-9be5-5a7db0881abc"). InnerVolumeSpecName "kube-api-access-g9gjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.899919 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.900205 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab" containerName="kube-state-metrics" containerID="cri-o://6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b" gracePeriod=30 Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.900432 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-scripts" (OuterVolumeSpecName: "scripts") pod "fad155f8-cdef-44b0-9be5-5a7db0881abc" (UID: "fad155f8-cdef-44b0-9be5-5a7db0881abc"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.922823 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-config-data" (OuterVolumeSpecName: "config-data") pod "fad155f8-cdef-44b0-9be5-5a7db0881abc" (UID: "fad155f8-cdef-44b0-9be5-5a7db0881abc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.927024 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fad155f8-cdef-44b0-9be5-5a7db0881abc" (UID: "fad155f8-cdef-44b0-9be5-5a7db0881abc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.982541 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.982597 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.982612 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9gjl\" (UniqueName: \"kubernetes.io/projected/fad155f8-cdef-44b0-9be5-5a7db0881abc-kube-api-access-g9gjl\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:04 crc kubenswrapper[4953]: I1211 10:36:04.982625 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad155f8-cdef-44b0-9be5-5a7db0881abc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.384220 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.427470 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-v4tfr" event={"ID":"fad155f8-cdef-44b0-9be5-5a7db0881abc","Type":"ContainerDied","Data":"325f98c4efe6e8fcba59ca41b680ba85cd88a672376128e5db1e016a53051007"} Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.427498 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-v4tfr" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.427892 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="325f98c4efe6e8fcba59ca41b680ba85cd88a672376128e5db1e016a53051007" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.429523 4953 generic.go:334] "Generic (PLEG): container finished" podID="9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab" containerID="6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b" exitCode=2 Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.429556 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab","Type":"ContainerDied","Data":"6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b"} Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.429648 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab","Type":"ContainerDied","Data":"738dfe9957b6c367e5eeb447874b2745a611cfce8db1f2768ce655265cb1852c"} Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.429671 4953 scope.go:117] "RemoveContainer" containerID="6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.429736 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.480365 4953 scope.go:117] "RemoveContainer" containerID="6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b" Dec 11 10:36:05 crc kubenswrapper[4953]: E1211 10:36:05.481198 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b\": container with ID starting with 6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b not found: ID does not exist" containerID="6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.481251 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b"} err="failed to get container status \"6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b\": rpc error: code = NotFound desc = could not find container \"6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b\": container with ID starting with 6cf3f181073119c217c689c71a3c2197cf56714fb1f0e3129fa899a165c1605b not found: ID does not exist" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.494464 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-742qb\" (UniqueName: \"kubernetes.io/projected/9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab-kube-api-access-742qb\") pod \"9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab\" (UID: \"9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab\") " Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.502089 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab-kube-api-access-742qb" (OuterVolumeSpecName: "kube-api-access-742qb") pod "9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab" (UID: "9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab"). InnerVolumeSpecName "kube-api-access-742qb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.525620 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cfhwh"] Dec 11 10:36:05 crc kubenswrapper[4953]: E1211 10:36:05.526152 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad155f8-cdef-44b0-9be5-5a7db0881abc" containerName="nova-cell0-conductor-db-sync" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.526182 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad155f8-cdef-44b0-9be5-5a7db0881abc" containerName="nova-cell0-conductor-db-sync" Dec 11 10:36:05 crc kubenswrapper[4953]: E1211 10:36:05.526196 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab" containerName="kube-state-metrics" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.526203 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab" containerName="kube-state-metrics" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.526429 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad155f8-cdef-44b0-9be5-5a7db0881abc" containerName="nova-cell0-conductor-db-sync" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.526445 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab" containerName="kube-state-metrics" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.532029 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.558706 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cfhwh"] Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.597740 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-742qb\" (UniqueName: \"kubernetes.io/projected/9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab-kube-api-access-742qb\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.634857 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.636246 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.640392 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vq2tb" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.643399 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.676966 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.701706 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d95335-f50e-4677-b3f9-bad2ab143c43-utilities\") pod \"redhat-operators-cfhwh\" (UID: \"91d95335-f50e-4677-b3f9-bad2ab143c43\") " pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.701750 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d95335-f50e-4677-b3f9-bad2ab143c43-catalog-content\") pod \"redhat-operators-cfhwh\" (UID: \"91d95335-f50e-4677-b3f9-bad2ab143c43\") " pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.701802 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kxxb\" (UniqueName: \"kubernetes.io/projected/91d95335-f50e-4677-b3f9-bad2ab143c43-kube-api-access-8kxxb\") pod \"redhat-operators-cfhwh\" (UID: \"91d95335-f50e-4677-b3f9-bad2ab143c43\") " pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.770641 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.778541 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.790831 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.796556 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.799253 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.799252 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.803225 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfg7\" (UniqueName: \"kubernetes.io/projected/1b3d5c24-61f6-4926-94ec-0e3a462334df-kube-api-access-hpfg7\") pod \"nova-cell0-conductor-0\" (UID: \"1b3d5c24-61f6-4926-94ec-0e3a462334df\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.803330 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3d5c24-61f6-4926-94ec-0e3a462334df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1b3d5c24-61f6-4926-94ec-0e3a462334df\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.803368 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d95335-f50e-4677-b3f9-bad2ab143c43-utilities\") pod \"redhat-operators-cfhwh\" (UID: \"91d95335-f50e-4677-b3f9-bad2ab143c43\") " pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.803396 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3d5c24-61f6-4926-94ec-0e3a462334df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1b3d5c24-61f6-4926-94ec-0e3a462334df\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.803418 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d95335-f50e-4677-b3f9-bad2ab143c43-catalog-content\") pod \"redhat-operators-cfhwh\" (UID: \"91d95335-f50e-4677-b3f9-bad2ab143c43\") " pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.803465 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kxxb\" (UniqueName: \"kubernetes.io/projected/91d95335-f50e-4677-b3f9-bad2ab143c43-kube-api-access-8kxxb\") pod \"redhat-operators-cfhwh\" (UID: \"91d95335-f50e-4677-b3f9-bad2ab143c43\") " pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.804188 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d95335-f50e-4677-b3f9-bad2ab143c43-utilities\") pod \"redhat-operators-cfhwh\" (UID: \"91d95335-f50e-4677-b3f9-bad2ab143c43\") " pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.804465 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d95335-f50e-4677-b3f9-bad2ab143c43-catalog-content\") pod \"redhat-operators-cfhwh\" (UID: \"91d95335-f50e-4677-b3f9-bad2ab143c43\") " pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:05 crc 
kubenswrapper[4953]: I1211 10:36:05.805737 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.843218 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kxxb\" (UniqueName: \"kubernetes.io/projected/91d95335-f50e-4677-b3f9-bad2ab143c43-kube-api-access-8kxxb\") pod \"redhat-operators-cfhwh\" (UID: \"91d95335-f50e-4677-b3f9-bad2ab143c43\") " pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.863153 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.904820 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " pod="openstack/kube-state-metrics-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.904882 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bsbd\" (UniqueName: \"kubernetes.io/projected/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-api-access-6bsbd\") pod \"kube-state-metrics-0\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " pod="openstack/kube-state-metrics-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.904923 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpfg7\" (UniqueName: \"kubernetes.io/projected/1b3d5c24-61f6-4926-94ec-0e3a462334df-kube-api-access-hpfg7\") pod \"nova-cell0-conductor-0\" (UID: \"1b3d5c24-61f6-4926-94ec-0e3a462334df\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.904953 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " pod="openstack/kube-state-metrics-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.905026 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " pod="openstack/kube-state-metrics-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.905086 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3d5c24-61f6-4926-94ec-0e3a462334df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1b3d5c24-61f6-4926-94ec-0e3a462334df\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.905116 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3d5c24-61f6-4926-94ec-0e3a462334df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1b3d5c24-61f6-4926-94ec-0e3a462334df\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 
10:36:05.910458 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3d5c24-61f6-4926-94ec-0e3a462334df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1b3d5c24-61f6-4926-94ec-0e3a462334df\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.912139 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3d5c24-61f6-4926-94ec-0e3a462334df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1b3d5c24-61f6-4926-94ec-0e3a462334df\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.931447 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpfg7\" (UniqueName: \"kubernetes.io/projected/1b3d5c24-61f6-4926-94ec-0e3a462334df-kube-api-access-hpfg7\") pod \"nova-cell0-conductor-0\" (UID: \"1b3d5c24-61f6-4926-94ec-0e3a462334df\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:05 crc kubenswrapper[4953]: I1211 10:36:05.966222 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.007845 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " pod="openstack/kube-state-metrics-0" Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.007914 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bsbd\" (UniqueName: \"kubernetes.io/projected/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-api-access-6bsbd\") pod \"kube-state-metrics-0\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " pod="openstack/kube-state-metrics-0" Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.007978 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " pod="openstack/kube-state-metrics-0" Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.008060 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " pod="openstack/kube-state-metrics-0" Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.012686 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " pod="openstack/kube-state-metrics-0" Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.012778 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") 
" pod="openstack/kube-state-metrics-0" Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.015366 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " pod="openstack/kube-state-metrics-0" Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.034538 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bsbd\" (UniqueName: \"kubernetes.io/projected/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-api-access-6bsbd\") pod \"kube-state-metrics-0\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " pod="openstack/kube-state-metrics-0" Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.124117 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.465501 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cfhwh"] Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.486647 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab" path="/var/lib/kubelet/pods/9bc1f5cb-5d27-4ce1-8f01-5219db1cbeab/volumes" Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.592333 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 10:36:06 crc kubenswrapper[4953]: I1211 10:36:06.738704 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:36:06 crc kubenswrapper[4953]: W1211 10:36:06.799971 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9da03c89_b3fb_431e_bef0_eb8f6d0b180e.slice/crio-7dcdbbd8ee7d8a16bad77426b73131c9d4d423d0cbe6d097e8bf75b5a2d868cc WatchSource:0}: Error finding container 7dcdbbd8ee7d8a16bad77426b73131c9d4d423d0cbe6d097e8bf75b5a2d868cc: Status 404 returned error can't find the container with id 7dcdbbd8ee7d8a16bad77426b73131c9d4d423d0cbe6d097e8bf75b5a2d868cc Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.450950 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9da03c89-b3fb-431e-bef0-eb8f6d0b180e","Type":"ContainerStarted","Data":"e77c7ae1c87e7949e1f82009c61668e44597f8128ff13d56a0b924b074388ac2"} Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.451379 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.451398 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9da03c89-b3fb-431e-bef0-eb8f6d0b180e","Type":"ContainerStarted","Data":"7dcdbbd8ee7d8a16bad77426b73131c9d4d423d0cbe6d097e8bf75b5a2d868cc"} Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.452658 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1b3d5c24-61f6-4926-94ec-0e3a462334df","Type":"ContainerStarted","Data":"4d40902f2adb77e2b7dde3ed43d14df9863e66572e62ab6a82f12fa7bb0bcca2"} Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.452716 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"1b3d5c24-61f6-4926-94ec-0e3a462334df","Type":"ContainerStarted","Data":"94dbef9b490a8e8c405a0a6aed23fb65496547797e1ed5c0afaf857e99833387"} Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.452762 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.454564 4953 generic.go:334] "Generic (PLEG): container finished" podID="91d95335-f50e-4677-b3f9-bad2ab143c43" containerID="571f2f0d095093b25606c15c6559ea2443bec43af1017a2d950ecc2a2775b846" exitCode=0 Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.454640 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfhwh" event={"ID":"91d95335-f50e-4677-b3f9-bad2ab143c43","Type":"ContainerDied","Data":"571f2f0d095093b25606c15c6559ea2443bec43af1017a2d950ecc2a2775b846"} Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.454660 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfhwh" event={"ID":"91d95335-f50e-4677-b3f9-bad2ab143c43","Type":"ContainerStarted","Data":"c371d81790001ff851de4bd6490443fb83e9c5070777f014a356cf9f75098bca"} Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.489272 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.146137304 podStartE2EDuration="2.489253723s" podCreationTimestamp="2025-12-11 10:36:05 +0000 UTC" firstStartedPulling="2025-12-11 10:36:06.802328143 +0000 UTC m=+1484.826187176" lastFinishedPulling="2025-12-11 10:36:07.145444562 +0000 UTC m=+1485.169303595" observedRunningTime="2025-12-11 10:36:07.472416382 +0000 UTC m=+1485.496275415" watchObservedRunningTime="2025-12-11 10:36:07.489253723 +0000 UTC m=+1485.513112756" Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.527047 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.527028193 podStartE2EDuration="2.527028193s" podCreationTimestamp="2025-12-11 10:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:36:07.514503929 +0000 UTC m=+1485.538362982" watchObservedRunningTime="2025-12-11 10:36:07.527028193 +0000 UTC m=+1485.550887226" Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.584124 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.584818 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="ceilometer-central-agent" containerID="cri-o://98aa167b6a86472673752a8c31fe391db3dab7e00df30e5692bbe6b0d694ba44" gracePeriod=30 Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.584868 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="proxy-httpd" containerID="cri-o://cc48106ab88f39e2fa2e3cc788010ec227413582c1f829e7114f90b57f2b3c7a" gracePeriod=30 Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.584959 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="sg-core" 
containerID="cri-o://9097898d38c6000d19b402e2d3cee9123e73375df063ef446677a79125d77a30" gracePeriod=30 Dec 11 10:36:07 crc kubenswrapper[4953]: I1211 10:36:07.585016 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="ceilometer-notification-agent" containerID="cri-o://a01d9ed466563c185cfcacb8d3f0b91daea77fd31e94cfbede7a8ecb1cd70b95" gracePeriod=30 Dec 11 10:36:08 crc kubenswrapper[4953]: I1211 10:36:08.469840 4953 generic.go:334] "Generic (PLEG): container finished" podID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerID="cc48106ab88f39e2fa2e3cc788010ec227413582c1f829e7114f90b57f2b3c7a" exitCode=0 Dec 11 10:36:08 crc kubenswrapper[4953]: I1211 10:36:08.470136 4953 generic.go:334] "Generic (PLEG): container finished" podID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerID="9097898d38c6000d19b402e2d3cee9123e73375df063ef446677a79125d77a30" exitCode=2 Dec 11 10:36:08 crc kubenswrapper[4953]: I1211 10:36:08.470152 4953 generic.go:334] "Generic (PLEG): container finished" podID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerID="98aa167b6a86472673752a8c31fe391db3dab7e00df30e5692bbe6b0d694ba44" exitCode=0 Dec 11 10:36:08 crc kubenswrapper[4953]: I1211 10:36:08.471747 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b808d3c-37db-43c0-bec6-8edeca8028c5","Type":"ContainerDied","Data":"cc48106ab88f39e2fa2e3cc788010ec227413582c1f829e7114f90b57f2b3c7a"} Dec 11 10:36:08 crc kubenswrapper[4953]: I1211 10:36:08.471811 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b808d3c-37db-43c0-bec6-8edeca8028c5","Type":"ContainerDied","Data":"9097898d38c6000d19b402e2d3cee9123e73375df063ef446677a79125d77a30"} Dec 11 10:36:08 crc kubenswrapper[4953]: I1211 10:36:08.471826 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b808d3c-37db-43c0-bec6-8edeca8028c5","Type":"ContainerDied","Data":"98aa167b6a86472673752a8c31fe391db3dab7e00df30e5692bbe6b0d694ba44"} Dec 11 10:36:09 crc kubenswrapper[4953]: I1211 10:36:09.486313 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfhwh" event={"ID":"91d95335-f50e-4677-b3f9-bad2ab143c43","Type":"ContainerStarted","Data":"99ef33837ac2ece6cbc1a7e300f49c2a737e90eef345cb0114bfd3f8cc04c3de"} Dec 11 10:36:10 crc kubenswrapper[4953]: I1211 10:36:10.496199 4953 generic.go:334] "Generic (PLEG): container finished" podID="91d95335-f50e-4677-b3f9-bad2ab143c43" containerID="99ef33837ac2ece6cbc1a7e300f49c2a737e90eef345cb0114bfd3f8cc04c3de" exitCode=0 Dec 11 10:36:10 crc kubenswrapper[4953]: I1211 10:36:10.496265 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfhwh" event={"ID":"91d95335-f50e-4677-b3f9-bad2ab143c43","Type":"ContainerDied","Data":"99ef33837ac2ece6cbc1a7e300f49c2a737e90eef345cb0114bfd3f8cc04c3de"} Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.554330 4953 generic.go:334] "Generic (PLEG): container finished" podID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerID="a01d9ed466563c185cfcacb8d3f0b91daea77fd31e94cfbede7a8ecb1cd70b95" exitCode=0 Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.554829 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6b808d3c-37db-43c0-bec6-8edeca8028c5","Type":"ContainerDied","Data":"a01d9ed466563c185cfcacb8d3f0b91daea77fd31e94cfbede7a8ecb1cd70b95"} Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.566385 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfhwh" event={"ID":"91d95335-f50e-4677-b3f9-bad2ab143c43","Type":"ContainerStarted","Data":"b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08"} Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.598712 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cfhwh" podStartSLOduration=3.172044601 podStartE2EDuration="9.598416336s" podCreationTimestamp="2025-12-11 10:36:05 +0000 UTC" firstStartedPulling="2025-12-11 10:36:07.455905041 +0000 UTC m=+1485.479764074" lastFinishedPulling="2025-12-11 10:36:13.882276776 +0000 UTC m=+1491.906135809" observedRunningTime="2025-12-11 10:36:14.583148673 +0000 UTC m=+1492.607007706" watchObservedRunningTime="2025-12-11 10:36:14.598416336 +0000 UTC m=+1492.622275369" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.749406 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.808407 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-combined-ca-bundle\") pod \"6b808d3c-37db-43c0-bec6-8edeca8028c5\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.808513 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-config-data\") pod \"6b808d3c-37db-43c0-bec6-8edeca8028c5\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.808541 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-sg-core-conf-yaml\") pod \"6b808d3c-37db-43c0-bec6-8edeca8028c5\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.808586 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b808d3c-37db-43c0-bec6-8edeca8028c5-log-httpd\") pod \"6b808d3c-37db-43c0-bec6-8edeca8028c5\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.808612 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b808d3c-37db-43c0-bec6-8edeca8028c5-run-httpd\") pod \"6b808d3c-37db-43c0-bec6-8edeca8028c5\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.808715 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-scripts\") pod \"6b808d3c-37db-43c0-bec6-8edeca8028c5\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.808791 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5nqn\" (UniqueName: 
\"kubernetes.io/projected/6b808d3c-37db-43c0-bec6-8edeca8028c5-kube-api-access-v5nqn\") pod \"6b808d3c-37db-43c0-bec6-8edeca8028c5\" (UID: \"6b808d3c-37db-43c0-bec6-8edeca8028c5\") " Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.810112 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b808d3c-37db-43c0-bec6-8edeca8028c5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6b808d3c-37db-43c0-bec6-8edeca8028c5" (UID: "6b808d3c-37db-43c0-bec6-8edeca8028c5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.810284 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b808d3c-37db-43c0-bec6-8edeca8028c5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6b808d3c-37db-43c0-bec6-8edeca8028c5" (UID: "6b808d3c-37db-43c0-bec6-8edeca8028c5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.816590 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b808d3c-37db-43c0-bec6-8edeca8028c5-kube-api-access-v5nqn" (OuterVolumeSpecName: "kube-api-access-v5nqn") pod "6b808d3c-37db-43c0-bec6-8edeca8028c5" (UID: "6b808d3c-37db-43c0-bec6-8edeca8028c5"). InnerVolumeSpecName "kube-api-access-v5nqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.816961 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-scripts" (OuterVolumeSpecName: "scripts") pod "6b808d3c-37db-43c0-bec6-8edeca8028c5" (UID: "6b808d3c-37db-43c0-bec6-8edeca8028c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.846939 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6b808d3c-37db-43c0-bec6-8edeca8028c5" (UID: "6b808d3c-37db-43c0-bec6-8edeca8028c5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.918150 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5nqn\" (UniqueName: \"kubernetes.io/projected/6b808d3c-37db-43c0-bec6-8edeca8028c5-kube-api-access-v5nqn\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.918181 4953 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.918208 4953 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b808d3c-37db-43c0-bec6-8edeca8028c5-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.918219 4953 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b808d3c-37db-43c0-bec6-8edeca8028c5-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.918229 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.918711 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b808d3c-37db-43c0-bec6-8edeca8028c5" (UID: "6b808d3c-37db-43c0-bec6-8edeca8028c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:14 crc kubenswrapper[4953]: I1211 10:36:14.924014 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-config-data" (OuterVolumeSpecName: "config-data") pod "6b808d3c-37db-43c0-bec6-8edeca8028c5" (UID: "6b808d3c-37db-43c0-bec6-8edeca8028c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.019445 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.019683 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b808d3c-37db-43c0-bec6-8edeca8028c5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.582513 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.589847 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b808d3c-37db-43c0-bec6-8edeca8028c5","Type":"ContainerDied","Data":"4cbfefa823421584bc503a7f333d75109d486ac2fc4ab996d7a1a8b8a7d7a0b9"} Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.589897 4953 scope.go:117] "RemoveContainer" containerID="cc48106ab88f39e2fa2e3cc788010ec227413582c1f829e7114f90b57f2b3c7a" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.622228 4953 scope.go:117] "RemoveContainer" containerID="9097898d38c6000d19b402e2d3cee9123e73375df063ef446677a79125d77a30" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.622741 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.633171 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.655779 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:36:15 crc kubenswrapper[4953]: E1211 10:36:15.656212 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="ceilometer-central-agent" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.656231 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="ceilometer-central-agent" Dec 11 10:36:15 crc kubenswrapper[4953]: E1211 10:36:15.656244 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="sg-core" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.656251 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="sg-core" Dec 11 10:36:15 crc kubenswrapper[4953]: E1211 10:36:15.656270 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="proxy-httpd" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.656277 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="proxy-httpd" Dec 11 10:36:15 crc kubenswrapper[4953]: E1211 10:36:15.656286 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="ceilometer-notification-agent" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.656292 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="ceilometer-notification-agent" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.656453 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="sg-core" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.656467 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="proxy-httpd" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.656484 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" containerName="ceilometer-central-agent" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.656501 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" 
containerName="ceilometer-notification-agent" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.658262 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.663713 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.663872 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.664121 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.671610 4953 scope.go:117] "RemoveContainer" containerID="a01d9ed466563c185cfcacb8d3f0b91daea77fd31e94cfbede7a8ecb1cd70b95" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.686330 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.745290 4953 scope.go:117] "RemoveContainer" containerID="98aa167b6a86472673752a8c31fe391db3dab7e00df30e5692bbe6b0d694ba44" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.832468 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-config-data\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.832531 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-scripts\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.832611 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.832695 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvdzr\" (UniqueName: \"kubernetes.io/projected/06de80b3-ffbb-4efd-beca-dbf2b67046fa-kube-api-access-gvdzr\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.832737 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.832761 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06de80b3-ffbb-4efd-beca-dbf2b67046fa-run-httpd\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0" Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.832782 4953 
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.832806 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06de80b3-ffbb-4efd-beca-dbf2b67046fa-log-httpd\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.863586 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cfhwh"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.863662 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cfhwh"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.934709 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-scripts\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.934833 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.934932 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvdzr\" (UniqueName: \"kubernetes.io/projected/06de80b3-ffbb-4efd-beca-dbf2b67046fa-kube-api-access-gvdzr\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.934995 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.935068 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06de80b3-ffbb-4efd-beca-dbf2b67046fa-run-httpd\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.935120 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.935176 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06de80b3-ffbb-4efd-beca-dbf2b67046fa-log-httpd\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.935229 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-config-data\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.935779 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06de80b3-ffbb-4efd-beca-dbf2b67046fa-run-httpd\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.936089 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06de80b3-ffbb-4efd-beca-dbf2b67046fa-log-httpd\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.940395 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-scripts\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.940800 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-config-data\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.944538 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.954149 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.954963 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.971667 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvdzr\" (UniqueName: \"kubernetes.io/projected/06de80b3-ffbb-4efd-beca-dbf2b67046fa-kube-api-access-gvdzr\") pod \"ceilometer-0\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " pod="openstack/ceilometer-0"
Dec 11 10:36:15 crc kubenswrapper[4953]: I1211 10:36:15.988856 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.004481 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.181276 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.527417 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b808d3c-37db-43c0-bec6-8edeca8028c5" path="/var/lib/kubelet/pods/6b808d3c-37db-43c0-bec6-8edeca8028c5/volumes" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.570095 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:36:16 crc kubenswrapper[4953]: W1211 10:36:16.576816 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06de80b3_ffbb_4efd_beca_dbf2b67046fa.slice/crio-b338969bdf245a9b78f32e3573e9ed668362f3df5bd891213e53caccf7ab725f WatchSource:0}: Error finding container b338969bdf245a9b78f32e3573e9ed668362f3df5bd891213e53caccf7ab725f: Status 404 returned error can't find the container with id b338969bdf245a9b78f32e3573e9ed668362f3df5bd891213e53caccf7ab725f Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.593367 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06de80b3-ffbb-4efd-beca-dbf2b67046fa","Type":"ContainerStarted","Data":"b338969bdf245a9b78f32e3573e9ed668362f3df5bd891213e53caccf7ab725f"} Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.725746 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8t2m9"] Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.726949 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.729391 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.731550 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.745920 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8t2m9"] Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.857723 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-scripts\") pod \"nova-cell0-cell-mapping-8t2m9\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.858173 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mpbh\" (UniqueName: \"kubernetes.io/projected/8092c791-0c1a-454e-9fe8-b3dcb63c3415-kube-api-access-9mpbh\") pod \"nova-cell0-cell-mapping-8t2m9\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.858247 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8t2m9\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.858619 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-config-data\") pod \"nova-cell0-cell-mapping-8t2m9\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.923006 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cfhwh" podUID="91d95335-f50e-4677-b3f9-bad2ab143c43" containerName="registry-server" probeResult="failure" output=< Dec 11 10:36:16 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s Dec 11 10:36:16 crc kubenswrapper[4953]: > Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.945065 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.946635 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.949200 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.959687 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-scripts\") pod \"nova-cell0-cell-mapping-8t2m9\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.959755 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mpbh\" (UniqueName: \"kubernetes.io/projected/8092c791-0c1a-454e-9fe8-b3dcb63c3415-kube-api-access-9mpbh\") pod \"nova-cell0-cell-mapping-8t2m9\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.959796 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d03f3f-057f-4c77-82a4-c394e0732e01-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " pod="openstack/nova-api-0" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.959820 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d03f3f-057f-4c77-82a4-c394e0732e01-config-data\") pod \"nova-api-0\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " pod="openstack/nova-api-0" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.959850 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8t2m9\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.959866 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26d03f3f-057f-4c77-82a4-c394e0732e01-logs\") pod \"nova-api-0\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " pod="openstack/nova-api-0" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.959924 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk6f2\" (UniqueName: \"kubernetes.io/projected/26d03f3f-057f-4c77-82a4-c394e0732e01-kube-api-access-xk6f2\") pod \"nova-api-0\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " pod="openstack/nova-api-0" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.959955 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-config-data\") pod \"nova-cell0-cell-mapping-8t2m9\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.974410 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-config-data\") pod \"nova-cell0-cell-mapping-8t2m9\" (UID: 
\"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.981426 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8t2m9\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:16 crc kubenswrapper[4953]: I1211 10:36:16.981819 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-scripts\") pod \"nova-cell0-cell-mapping-8t2m9\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.014667 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.064080 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk6f2\" (UniqueName: \"kubernetes.io/projected/26d03f3f-057f-4c77-82a4-c394e0732e01-kube-api-access-xk6f2\") pod \"nova-api-0\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " pod="openstack/nova-api-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.064290 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d03f3f-057f-4c77-82a4-c394e0732e01-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " pod="openstack/nova-api-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.132543 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mpbh\" (UniqueName: \"kubernetes.io/projected/8092c791-0c1a-454e-9fe8-b3dcb63c3415-kube-api-access-9mpbh\") pod \"nova-cell0-cell-mapping-8t2m9\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.140867 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d03f3f-057f-4c77-82a4-c394e0732e01-config-data\") pod \"nova-api-0\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " pod="openstack/nova-api-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.140962 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26d03f3f-057f-4c77-82a4-c394e0732e01-logs\") pod \"nova-api-0\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " pod="openstack/nova-api-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.141504 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26d03f3f-057f-4c77-82a4-c394e0732e01-logs\") pod \"nova-api-0\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " pod="openstack/nova-api-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.167906 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d03f3f-057f-4c77-82a4-c394e0732e01-config-data\") pod \"nova-api-0\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " pod="openstack/nova-api-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.189638 4953 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d03f3f-057f-4c77-82a4-c394e0732e01-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " pod="openstack/nova-api-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.255858 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk6f2\" (UniqueName: \"kubernetes.io/projected/26d03f3f-057f-4c77-82a4-c394e0732e01-kube-api-access-xk6f2\") pod \"nova-api-0\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " pod="openstack/nova-api-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.278181 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.287874 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.297301 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.308964 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.339683 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.351327 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.358389 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-config-data\") pod \"nova-metadata-0\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.358792 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-logs\") pod \"nova-metadata-0\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.358818 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.358857 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbhf9\" (UniqueName: \"kubernetes.io/projected/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-kube-api-access-rbhf9\") pod \"nova-metadata-0\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.398886 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.400709 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.461065 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-config-data\") pod \"nova-metadata-0\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.461374 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-logs\") pod \"nova-metadata-0\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.472788 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.473007 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbhf9\" (UniqueName: \"kubernetes.io/projected/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-kube-api-access-rbhf9\") pod \"nova-metadata-0\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.565058 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-logs\") pod \"nova-metadata-0\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.567299 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.569437 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-config-data\") pod \"nova-metadata-0\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.571348 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.591527 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.673501 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df1c590-48bc-4795-8655-114657aa49e9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9df1c590-48bc-4795-8655-114657aa49e9\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.674527 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-799wh\" (UniqueName: \"kubernetes.io/projected/9df1c590-48bc-4795-8655-114657aa49e9-kube-api-access-799wh\") pod 
\"nova-scheduler-0\" (UID: \"9df1c590-48bc-4795-8655-114657aa49e9\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.675989 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df1c590-48bc-4795-8655-114657aa49e9-config-data\") pod \"nova-scheduler-0\" (UID: \"9df1c590-48bc-4795-8655-114657aa49e9\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.677748 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbhf9\" (UniqueName: \"kubernetes.io/projected/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-kube-api-access-rbhf9\") pod \"nova-metadata-0\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " pod="openstack/nova-metadata-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.761145 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.763566 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.766656 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.796907 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df1c590-48bc-4795-8655-114657aa49e9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9df1c590-48bc-4795-8655-114657aa49e9\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.797318 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-799wh\" (UniqueName: \"kubernetes.io/projected/9df1c590-48bc-4795-8655-114657aa49e9-kube-api-access-799wh\") pod \"nova-scheduler-0\" (UID: \"9df1c590-48bc-4795-8655-114657aa49e9\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.797369 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df1c590-48bc-4795-8655-114657aa49e9-config-data\") pod \"nova-scheduler-0\" (UID: \"9df1c590-48bc-4795-8655-114657aa49e9\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.821469 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df1c590-48bc-4795-8655-114657aa49e9-config-data\") pod \"nova-scheduler-0\" (UID: \"9df1c590-48bc-4795-8655-114657aa49e9\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.829709 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-799wh\" (UniqueName: \"kubernetes.io/projected/9df1c590-48bc-4795-8655-114657aa49e9-kube-api-access-799wh\") pod \"nova-scheduler-0\" (UID: \"9df1c590-48bc-4795-8655-114657aa49e9\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.838869 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df1c590-48bc-4795-8655-114657aa49e9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9df1c590-48bc-4795-8655-114657aa49e9\") " pod="openstack/nova-scheduler-0" Dec 11 
10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.841884 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-rcxzb"] Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.852373 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.890981 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.899745 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvfjv\" (UniqueName: \"kubernetes.io/projected/847529a7-a29f-4eb3-a678-3a909e4aa0f2-kube-api-access-kvfjv\") pod \"nova-cell1-novncproxy-0\" (UID: \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.899831 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847529a7-a29f-4eb3-a678-3a909e4aa0f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.899960 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847529a7-a29f-4eb3-a678-3a909e4aa0f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.916007 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.916042 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-rcxzb"] Dec 11 10:36:17 crc kubenswrapper[4953]: I1211 10:36:17.981032 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.122498 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.122858 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvfjv\" (UniqueName: \"kubernetes.io/projected/847529a7-a29f-4eb3-a678-3a909e4aa0f2-kube-api-access-kvfjv\") pod \"nova-cell1-novncproxy-0\" (UID: \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.122901 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847529a7-a29f-4eb3-a678-3a909e4aa0f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.122938 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.122985 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nk7b\" (UniqueName: \"kubernetes.io/projected/180fcc74-0e86-469b-b328-ccdf05971726-kube-api-access-9nk7b\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.123006 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.123034 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847529a7-a29f-4eb3-a678-3a909e4aa0f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.123069 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-config\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.123127 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" 
(UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.138413 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847529a7-a29f-4eb3-a678-3a909e4aa0f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.143820 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847529a7-a29f-4eb3-a678-3a909e4aa0f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.191124 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvfjv\" (UniqueName: \"kubernetes.io/projected/847529a7-a29f-4eb3-a678-3a909e4aa0f2-kube-api-access-kvfjv\") pod \"nova-cell1-novncproxy-0\" (UID: \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.226426 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.226565 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nk7b\" (UniqueName: \"kubernetes.io/projected/180fcc74-0e86-469b-b328-ccdf05971726-kube-api-access-9nk7b\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.226617 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.226703 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-config\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.228173 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.228290 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " 
pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.228348 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.229997 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.231685 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-config\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.240810 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.243971 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.473006 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.476570 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nk7b\" (UniqueName: \"kubernetes.io/projected/180fcc74-0e86-469b-b328-ccdf05971726-kube-api-access-9nk7b\") pod \"dnsmasq-dns-557bbc7df7-rcxzb\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.536255 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.596230 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.751524 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06de80b3-ffbb-4efd-beca-dbf2b67046fa","Type":"ContainerStarted","Data":"5629ee0af725a70de6d673a73f0b63cccae2bfd4affdec4bec8266f31c24bc77"} Dec 11 10:36:18 crc kubenswrapper[4953]: W1211 10:36:18.827937 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8092c791_0c1a_454e_9fe8_b3dcb63c3415.slice/crio-0b7b8c7026220a9a0348421f17d2899505dd32db42d72a45721d525546fc2ddf WatchSource:0}: Error finding container 0b7b8c7026220a9a0348421f17d2899505dd32db42d72a45721d525546fc2ddf: Status 404 returned error can't find the container with id 0b7b8c7026220a9a0348421f17d2899505dd32db42d72a45721d525546fc2ddf Dec 11 10:36:18 crc kubenswrapper[4953]: I1211 10:36:18.842384 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8t2m9"] Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.115410 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:36:19 crc kubenswrapper[4953]: W1211 10:36:19.516061 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod847529a7_a29f_4eb3_a678_3a909e4aa0f2.slice/crio-285dcb6dc561c46607ba5180a2be9a8496fe34ec01936e25a741c9ce9f7b7b5b WatchSource:0}: Error finding container 285dcb6dc561c46607ba5180a2be9a8496fe34ec01936e25a741c9ce9f7b7b5b: Status 404 returned error can't find the container with id 285dcb6dc561c46607ba5180a2be9a8496fe34ec01936e25a741c9ce9f7b7b5b Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.516489 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.547471 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-rcxzb"] Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.556723 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.629631 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8g247"] Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.631241 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.637169 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.637403 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.641279 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8g247"] Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.736493 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8g247\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.736585 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-scripts\") pod \"nova-cell1-conductor-db-sync-8g247\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.736643 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-config-data\") pod \"nova-cell1-conductor-db-sync-8g247\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.736850 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8q9n\" (UniqueName: \"kubernetes.io/projected/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-kube-api-access-v8q9n\") pod \"nova-cell1-conductor-db-sync-8g247\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.774994 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9df1c590-48bc-4795-8655-114657aa49e9","Type":"ContainerStarted","Data":"b391b3286663695c17f187cba0967ea7e70dcc58488bfba7b247ef5adacc94b6"} Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.784275 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8t2m9" event={"ID":"8092c791-0c1a-454e-9fe8-b3dcb63c3415","Type":"ContainerStarted","Data":"12197dc212f018a71653f4be4db791ba82015fdb5a956452eb7226315756bede"} Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.784337 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8t2m9" event={"ID":"8092c791-0c1a-454e-9fe8-b3dcb63c3415","Type":"ContainerStarted","Data":"0b7b8c7026220a9a0348421f17d2899505dd32db42d72a45721d525546fc2ddf"} Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.801589 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06de80b3-ffbb-4efd-beca-dbf2b67046fa","Type":"ContainerStarted","Data":"ab340757c8543dbdd6c8d1eb6549599bd86ca8a4add2ed5959efaf65989d40c2"} Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 
10:36:19.805917 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc941d7f-79c3-4661-992e-7a27e3a6c6d9","Type":"ContainerStarted","Data":"e93bd97226497efaec8632d14d05b84220a26477258dcd3e1fff9af72a9dde21"} Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.811967 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" event={"ID":"180fcc74-0e86-469b-b328-ccdf05971726","Type":"ContainerStarted","Data":"7a898f598e255693454cd15728f317d46b3f29bd9c040ee9b106d6d413dc38a9"} Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.813120 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"847529a7-a29f-4eb3-a678-3a909e4aa0f2","Type":"ContainerStarted","Data":"285dcb6dc561c46607ba5180a2be9a8496fe34ec01936e25a741c9ce9f7b7b5b"} Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.815485 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26d03f3f-057f-4c77-82a4-c394e0732e01","Type":"ContainerStarted","Data":"7efddcacca3bbc9a8762a8baaceef3ec2bfacb66df8150b3e2cfd5c361508d3b"} Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.826799 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8t2m9" podStartSLOduration=3.826776703 podStartE2EDuration="3.826776703s" podCreationTimestamp="2025-12-11 10:36:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:36:19.811218129 +0000 UTC m=+1497.835077162" watchObservedRunningTime="2025-12-11 10:36:19.826776703 +0000 UTC m=+1497.850635736" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.838455 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-scripts\") pod \"nova-cell1-conductor-db-sync-8g247\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.838535 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-config-data\") pod \"nova-cell1-conductor-db-sync-8g247\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.838608 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8q9n\" (UniqueName: \"kubernetes.io/projected/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-kube-api-access-v8q9n\") pod \"nova-cell1-conductor-db-sync-8g247\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.838700 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8g247\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.845540 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8g247\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.845998 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-scripts\") pod \"nova-cell1-conductor-db-sync-8g247\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.846679 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-config-data\") pod \"nova-cell1-conductor-db-sync-8g247\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.863621 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8q9n\" (UniqueName: \"kubernetes.io/projected/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-kube-api-access-v8q9n\") pod \"nova-cell1-conductor-db-sync-8g247\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:19 crc kubenswrapper[4953]: I1211 10:36:19.983279 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:20 crc kubenswrapper[4953]: I1211 10:36:20.840768 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8g247"] Dec 11 10:36:20 crc kubenswrapper[4953]: I1211 10:36:20.841683 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06de80b3-ffbb-4efd-beca-dbf2b67046fa","Type":"ContainerStarted","Data":"11023c9ee9b023c7adc7cbbed1ae49ed8c4599657a8533550b9e0d297fba32d3"} Dec 11 10:36:20 crc kubenswrapper[4953]: I1211 10:36:20.854586 4953 generic.go:334] "Generic (PLEG): container finished" podID="180fcc74-0e86-469b-b328-ccdf05971726" containerID="1a361c23e77ccc1c64af016c246fb4ac4d67817bbaa2ab7ded619ddc594f7a26" exitCode=0 Dec 11 10:36:20 crc kubenswrapper[4953]: I1211 10:36:20.854683 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" event={"ID":"180fcc74-0e86-469b-b328-ccdf05971726","Type":"ContainerDied","Data":"1a361c23e77ccc1c64af016c246fb4ac4d67817bbaa2ab7ded619ddc594f7a26"} Dec 11 10:36:20 crc kubenswrapper[4953]: W1211 10:36:20.869122 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33f6128e_32cb_454f_ba24_3c8e4e1cb2ba.slice/crio-8389aa2f6d69e90820af173fa51d663464de769d92173015a53cae4528723eb7 WatchSource:0}: Error finding container 8389aa2f6d69e90820af173fa51d663464de769d92173015a53cae4528723eb7: Status 404 returned error can't find the container with id 8389aa2f6d69e90820af173fa51d663464de769d92173015a53cae4528723eb7 Dec 11 10:36:21 crc kubenswrapper[4953]: I1211 10:36:21.877675 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" event={"ID":"180fcc74-0e86-469b-b328-ccdf05971726","Type":"ContainerStarted","Data":"e90fc695beb9c3d6e53cf585d31e1f95c25f2f4f10bf03bb714d94fd13d3ffd3"} Dec 11 10:36:21 crc kubenswrapper[4953]: I1211 10:36:21.878088 4953 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:21 crc kubenswrapper[4953]: I1211 10:36:21.888197 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8g247" event={"ID":"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba","Type":"ContainerStarted","Data":"884bf7209c56c792aa6f6119e59a431349030aeb6799dab773a8e92ad0b9f5b2"} Dec 11 10:36:21 crc kubenswrapper[4953]: I1211 10:36:21.888251 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8g247" event={"ID":"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba","Type":"ContainerStarted","Data":"8389aa2f6d69e90820af173fa51d663464de769d92173015a53cae4528723eb7"} Dec 11 10:36:21 crc kubenswrapper[4953]: I1211 10:36:21.913951 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" podStartSLOduration=4.913924462 podStartE2EDuration="4.913924462s" podCreationTimestamp="2025-12-11 10:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:36:21.903941483 +0000 UTC m=+1499.927800516" watchObservedRunningTime="2025-12-11 10:36:21.913924462 +0000 UTC m=+1499.937783495" Dec 11 10:36:21 crc kubenswrapper[4953]: I1211 10:36:21.956960 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8g247" podStartSLOduration=2.956940937 podStartE2EDuration="2.956940937s" podCreationTimestamp="2025-12-11 10:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:36:21.943734338 +0000 UTC m=+1499.967593371" watchObservedRunningTime="2025-12-11 10:36:21.956940937 +0000 UTC m=+1499.980799970" Dec 11 10:36:21 crc kubenswrapper[4953]: I1211 10:36:21.975647 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:36:21 crc kubenswrapper[4953]: I1211 10:36:21.988277 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:24 crc kubenswrapper[4953]: I1211 10:36:24.936816 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06de80b3-ffbb-4efd-beca-dbf2b67046fa","Type":"ContainerStarted","Data":"e3d77dbc1259bf3a1ef2c7f949bc81dfba52ca6a16159f8fbf73a60eafd81b03"} Dec 11 10:36:24 crc kubenswrapper[4953]: I1211 10:36:24.938632 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 10:36:24 crc kubenswrapper[4953]: I1211 10:36:24.942248 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc941d7f-79c3-4661-992e-7a27e3a6c6d9","Type":"ContainerStarted","Data":"a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f"} Dec 11 10:36:24 crc kubenswrapper[4953]: I1211 10:36:24.952488 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"847529a7-a29f-4eb3-a678-3a909e4aa0f2","Type":"ContainerStarted","Data":"08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288"} Dec 11 10:36:24 crc kubenswrapper[4953]: I1211 10:36:24.952632 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="847529a7-a29f-4eb3-a678-3a909e4aa0f2" containerName="nova-cell1-novncproxy-novncproxy" 
containerID="cri-o://08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288" gracePeriod=30 Dec 11 10:36:24 crc kubenswrapper[4953]: I1211 10:36:24.961378 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.122309798 podStartE2EDuration="9.961363719s" podCreationTimestamp="2025-12-11 10:36:15 +0000 UTC" firstStartedPulling="2025-12-11 10:36:16.579408073 +0000 UTC m=+1494.603267106" lastFinishedPulling="2025-12-11 10:36:24.418461994 +0000 UTC m=+1502.442321027" observedRunningTime="2025-12-11 10:36:24.96106097 +0000 UTC m=+1502.984920003" watchObservedRunningTime="2025-12-11 10:36:24.961363719 +0000 UTC m=+1502.985222752" Dec 11 10:36:24 crc kubenswrapper[4953]: I1211 10:36:24.987074 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.09418793 podStartE2EDuration="7.987057226s" podCreationTimestamp="2025-12-11 10:36:17 +0000 UTC" firstStartedPulling="2025-12-11 10:36:19.519464478 +0000 UTC m=+1497.543323511" lastFinishedPulling="2025-12-11 10:36:24.412333774 +0000 UTC m=+1502.436192807" observedRunningTime="2025-12-11 10:36:24.982718281 +0000 UTC m=+1503.006577314" watchObservedRunningTime="2025-12-11 10:36:24.987057226 +0000 UTC m=+1503.010916259" Dec 11 10:36:25 crc kubenswrapper[4953]: I1211 10:36:25.035975 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7707219629999997 podStartE2EDuration="8.035954263s" podCreationTimestamp="2025-12-11 10:36:17 +0000 UTC" firstStartedPulling="2025-12-11 10:36:19.153180182 +0000 UTC m=+1497.177039215" lastFinishedPulling="2025-12-11 10:36:24.418412482 +0000 UTC m=+1502.442271515" observedRunningTime="2025-12-11 10:36:24.999705868 +0000 UTC m=+1503.023564901" watchObservedRunningTime="2025-12-11 10:36:25.035954263 +0000 UTC m=+1503.059813296" Dec 11 10:36:25 crc kubenswrapper[4953]: I1211 10:36:25.919779 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:25 crc kubenswrapper[4953]: I1211 10:36:25.991400 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:25 crc kubenswrapper[4953]: I1211 10:36:25.998591 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9df1c590-48bc-4795-8655-114657aa49e9","Type":"ContainerStarted","Data":"ec925673fe6d4a03f8a09fe186a8829fa6e16cec926305276056a86bc9311fa0"} Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.001562 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc941d7f-79c3-4661-992e-7a27e3a6c6d9","Type":"ContainerStarted","Data":"a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732"} Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.001752 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bc941d7f-79c3-4661-992e-7a27e3a6c6d9" containerName="nova-metadata-log" containerID="cri-o://a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f" gracePeriod=30 Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.001805 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bc941d7f-79c3-4661-992e-7a27e3a6c6d9" containerName="nova-metadata-metadata" 
containerID="cri-o://a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732" gracePeriod=30 Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.019822 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26d03f3f-057f-4c77-82a4-c394e0732e01","Type":"ContainerStarted","Data":"905c00d9bffdea6c8106bea9fc6becce3f6025d8a8507741a6ad08e738b95d78"} Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.021004 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26d03f3f-057f-4c77-82a4-c394e0732e01","Type":"ContainerStarted","Data":"a7f8fd17513ee9bb15963858d05565c67c15d09f6c9fa792151763f52954317e"} Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.045186 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.171453516 podStartE2EDuration="9.045167147s" podCreationTimestamp="2025-12-11 10:36:17 +0000 UTC" firstStartedPulling="2025-12-11 10:36:19.536130095 +0000 UTC m=+1497.559989118" lastFinishedPulling="2025-12-11 10:36:24.409843716 +0000 UTC m=+1502.433702749" observedRunningTime="2025-12-11 10:36:26.042201385 +0000 UTC m=+1504.066060418" watchObservedRunningTime="2025-12-11 10:36:26.045167147 +0000 UTC m=+1504.069026180" Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.066855 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.438144913 podStartE2EDuration="10.06683865s" podCreationTimestamp="2025-12-11 10:36:16 +0000 UTC" firstStartedPulling="2025-12-11 10:36:18.793631096 +0000 UTC m=+1496.817490129" lastFinishedPulling="2025-12-11 10:36:24.422324823 +0000 UTC m=+1502.446183866" observedRunningTime="2025-12-11 10:36:26.060602716 +0000 UTC m=+1504.084461759" watchObservedRunningTime="2025-12-11 10:36:26.06683865 +0000 UTC m=+1504.090697683" Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.160227 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cfhwh"] Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.582618 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.583144 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-combined-ca-bundle\") pod \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.583317 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbhf9\" (UniqueName: \"kubernetes.io/projected/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-kube-api-access-rbhf9\") pod \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.583450 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-logs\") pod \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.583510 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-config-data\") pod \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\" (UID: \"bc941d7f-79c3-4661-992e-7a27e3a6c6d9\") " Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.585168 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-logs" (OuterVolumeSpecName: "logs") pod "bc941d7f-79c3-4661-992e-7a27e3a6c6d9" (UID: "bc941d7f-79c3-4661-992e-7a27e3a6c6d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.590875 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-kube-api-access-rbhf9" (OuterVolumeSpecName: "kube-api-access-rbhf9") pod "bc941d7f-79c3-4661-992e-7a27e3a6c6d9" (UID: "bc941d7f-79c3-4661-992e-7a27e3a6c6d9"). InnerVolumeSpecName "kube-api-access-rbhf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.644457 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc941d7f-79c3-4661-992e-7a27e3a6c6d9" (UID: "bc941d7f-79c3-4661-992e-7a27e3a6c6d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.662273 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-config-data" (OuterVolumeSpecName: "config-data") pod "bc941d7f-79c3-4661-992e-7a27e3a6c6d9" (UID: "bc941d7f-79c3-4661-992e-7a27e3a6c6d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.684931 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.684980 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.684990 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:26 crc kubenswrapper[4953]: I1211 10:36:26.685002 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbhf9\" (UniqueName: \"kubernetes.io/projected/bc941d7f-79c3-4661-992e-7a27e3a6c6d9-kube-api-access-rbhf9\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.031625 4953 generic.go:334] "Generic (PLEG): container finished" podID="bc941d7f-79c3-4661-992e-7a27e3a6c6d9" containerID="a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732" exitCode=0 Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.033083 4953 generic.go:334] "Generic (PLEG): container finished" podID="bc941d7f-79c3-4661-992e-7a27e3a6c6d9" containerID="a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f" exitCode=143 Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.031739 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc941d7f-79c3-4661-992e-7a27e3a6c6d9","Type":"ContainerDied","Data":"a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732"} Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.031696 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.034802 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc941d7f-79c3-4661-992e-7a27e3a6c6d9","Type":"ContainerDied","Data":"a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f"} Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.034853 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc941d7f-79c3-4661-992e-7a27e3a6c6d9","Type":"ContainerDied","Data":"e93bd97226497efaec8632d14d05b84220a26477258dcd3e1fff9af72a9dde21"} Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.034879 4953 scope.go:117] "RemoveContainer" containerID="a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.034827 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cfhwh" podUID="91d95335-f50e-4677-b3f9-bad2ab143c43" containerName="registry-server" containerID="cri-o://b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08" gracePeriod=2 Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.078869 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.105501 4953 scope.go:117] "RemoveContainer" containerID="a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.123926 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.150028 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.150280 4953 scope.go:117] "RemoveContainer" containerID="a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732" Dec 11 10:36:27 crc kubenswrapper[4953]: E1211 10:36:27.150932 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732\": container with ID starting with a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732 not found: ID does not exist" containerID="a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.150992 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732"} err="failed to get container status \"a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732\": rpc error: code = NotFound desc = could not find container \"a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732\": container with ID starting with a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732 not found: ID does not exist" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.151075 4953 scope.go:117] "RemoveContainer" containerID="a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f" Dec 11 10:36:27 crc kubenswrapper[4953]: E1211 10:36:27.151841 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f\": container with ID starting with 
a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f not found: ID does not exist" containerID="a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f" Dec 11 10:36:27 crc kubenswrapper[4953]: E1211 10:36:27.151892 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc941d7f-79c3-4661-992e-7a27e3a6c6d9" containerName="nova-metadata-metadata" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.151909 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc941d7f-79c3-4661-992e-7a27e3a6c6d9" containerName="nova-metadata-metadata" Dec 11 10:36:27 crc kubenswrapper[4953]: E1211 10:36:27.151951 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc941d7f-79c3-4661-992e-7a27e3a6c6d9" containerName="nova-metadata-log" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.151957 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc941d7f-79c3-4661-992e-7a27e3a6c6d9" containerName="nova-metadata-log" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.151901 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f"} err="failed to get container status \"a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f\": rpc error: code = NotFound desc = could not find container \"a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f\": container with ID starting with a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f not found: ID does not exist" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.152047 4953 scope.go:117] "RemoveContainer" containerID="a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.152129 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc941d7f-79c3-4661-992e-7a27e3a6c6d9" containerName="nova-metadata-metadata" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.152146 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc941d7f-79c3-4661-992e-7a27e3a6c6d9" containerName="nova-metadata-log" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.153148 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.153787 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732"} err="failed to get container status \"a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732\": rpc error: code = NotFound desc = could not find container \"a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732\": container with ID starting with a203b4a9bc1875a65b43f7e0771678b63e631fe18d001c012a6e23fedb650732 not found: ID does not exist" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.153886 4953 scope.go:117] "RemoveContainer" containerID="a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.162005 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f"} err="failed to get container status \"a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f\": rpc error: code = NotFound desc = could not find container \"a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f\": container with ID starting with a29a619ff2f3874dbf4aeee9350042334546aa9def04de780c2f66aa6293cb4f not found: ID does not exist" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.162476 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.162602 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.169929 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.279450 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.280035 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.329487 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-config-data\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.329623 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-logs\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.329684 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfl8k\" (UniqueName: \"kubernetes.io/projected/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-kube-api-access-tfl8k\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.329720 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.329744 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.431818 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-logs\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.431900 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfl8k\" (UniqueName: \"kubernetes.io/projected/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-kube-api-access-tfl8k\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.431937 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.431962 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.432012 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-config-data\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.433395 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-logs\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.435703 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-config-data\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.436408 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.439448 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.458029 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfl8k\" (UniqueName: \"kubernetes.io/projected/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-kube-api-access-tfl8k\") pod \"nova-metadata-0\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.497507 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.616913 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.743209 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kxxb\" (UniqueName: \"kubernetes.io/projected/91d95335-f50e-4677-b3f9-bad2ab143c43-kube-api-access-8kxxb\") pod \"91d95335-f50e-4677-b3f9-bad2ab143c43\" (UID: \"91d95335-f50e-4677-b3f9-bad2ab143c43\") " Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.744076 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d95335-f50e-4677-b3f9-bad2ab143c43-utilities\") pod \"91d95335-f50e-4677-b3f9-bad2ab143c43\" (UID: \"91d95335-f50e-4677-b3f9-bad2ab143c43\") " Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.744149 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d95335-f50e-4677-b3f9-bad2ab143c43-catalog-content\") pod \"91d95335-f50e-4677-b3f9-bad2ab143c43\" (UID: \"91d95335-f50e-4677-b3f9-bad2ab143c43\") " Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.745017 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91d95335-f50e-4677-b3f9-bad2ab143c43-utilities" (OuterVolumeSpecName: "utilities") pod "91d95335-f50e-4677-b3f9-bad2ab143c43" (UID: "91d95335-f50e-4677-b3f9-bad2ab143c43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.745535 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d95335-f50e-4677-b3f9-bad2ab143c43-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.767056 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d95335-f50e-4677-b3f9-bad2ab143c43-kube-api-access-8kxxb" (OuterVolumeSpecName: "kube-api-access-8kxxb") pod "91d95335-f50e-4677-b3f9-bad2ab143c43" (UID: "91d95335-f50e-4677-b3f9-bad2ab143c43"). InnerVolumeSpecName "kube-api-access-8kxxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.847294 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kxxb\" (UniqueName: \"kubernetes.io/projected/91d95335-f50e-4677-b3f9-bad2ab143c43-kube-api-access-8kxxb\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.870627 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91d95335-f50e-4677-b3f9-bad2ab143c43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91d95335-f50e-4677-b3f9-bad2ab143c43" (UID: "91d95335-f50e-4677-b3f9-bad2ab143c43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.916687 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.917781 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.949166 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d95335-f50e-4677-b3f9-bad2ab143c43-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:27 crc kubenswrapper[4953]: I1211 10:36:27.963289 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.029876 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:28 crc kubenswrapper[4953]: W1211 10:36:28.046730 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod818f55f9_e432_4bb0_84a0_a4a1f76dbcf7.slice/crio-d1a4cc9e02f5afe157cecb4e07567a37fb92d7b5f03fda015c6255417d7eb71b WatchSource:0}: Error finding container d1a4cc9e02f5afe157cecb4e07567a37fb92d7b5f03fda015c6255417d7eb71b: Status 404 returned error can't find the container with id d1a4cc9e02f5afe157cecb4e07567a37fb92d7b5f03fda015c6255417d7eb71b Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.052260 4953 generic.go:334] "Generic (PLEG): container finished" podID="91d95335-f50e-4677-b3f9-bad2ab143c43" containerID="b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08" exitCode=0 Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.052329 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfhwh" event={"ID":"91d95335-f50e-4677-b3f9-bad2ab143c43","Type":"ContainerDied","Data":"b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08"} Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.052354 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfhwh" event={"ID":"91d95335-f50e-4677-b3f9-bad2ab143c43","Type":"ContainerDied","Data":"c371d81790001ff851de4bd6490443fb83e9c5070777f014a356cf9f75098bca"} Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.052372 4953 scope.go:117] "RemoveContainer" containerID="b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.052475 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cfhwh" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.080888 4953 generic.go:334] "Generic (PLEG): container finished" podID="8092c791-0c1a-454e-9fe8-b3dcb63c3415" containerID="12197dc212f018a71653f4be4db791ba82015fdb5a956452eb7226315756bede" exitCode=0 Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.082314 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8t2m9" event={"ID":"8092c791-0c1a-454e-9fe8-b3dcb63c3415","Type":"ContainerDied","Data":"12197dc212f018a71653f4be4db791ba82015fdb5a956452eb7226315756bede"} Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.134739 4953 scope.go:117] "RemoveContainer" containerID="99ef33837ac2ece6cbc1a7e300f49c2a737e90eef345cb0114bfd3f8cc04c3de" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.147001 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cfhwh"] Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.148802 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.171363 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cfhwh"] Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.194339 4953 scope.go:117] "RemoveContainer" containerID="571f2f0d095093b25606c15c6559ea2443bec43af1017a2d950ecc2a2775b846" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.228678 4953 scope.go:117] "RemoveContainer" containerID="b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08" Dec 11 10:36:28 crc kubenswrapper[4953]: E1211 10:36:28.229357 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08\": container with ID starting with b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08 not found: ID does not exist" containerID="b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.229402 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08"} err="failed to get container status \"b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08\": rpc error: code = NotFound desc = could not find container \"b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08\": container with ID starting with b338be24b9b7fc98cf818808dc924b7fa24ae9f50b38edddd4363bfbf6b84f08 not found: ID does not exist" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.229475 4953 scope.go:117] "RemoveContainer" containerID="99ef33837ac2ece6cbc1a7e300f49c2a737e90eef345cb0114bfd3f8cc04c3de" Dec 11 10:36:28 crc kubenswrapper[4953]: E1211 10:36:28.229841 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99ef33837ac2ece6cbc1a7e300f49c2a737e90eef345cb0114bfd3f8cc04c3de\": container with ID starting with 99ef33837ac2ece6cbc1a7e300f49c2a737e90eef345cb0114bfd3f8cc04c3de not found: ID does not exist" containerID="99ef33837ac2ece6cbc1a7e300f49c2a737e90eef345cb0114bfd3f8cc04c3de" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.229907 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99ef33837ac2ece6cbc1a7e300f49c2a737e90eef345cb0114bfd3f8cc04c3de"} err="failed to get container status \"99ef33837ac2ece6cbc1a7e300f49c2a737e90eef345cb0114bfd3f8cc04c3de\": rpc error: code = NotFound desc = could not find container \"99ef33837ac2ece6cbc1a7e300f49c2a737e90eef345cb0114bfd3f8cc04c3de\": container with ID starting with 99ef33837ac2ece6cbc1a7e300f49c2a737e90eef345cb0114bfd3f8cc04c3de not found: ID does not exist" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.229954 4953 scope.go:117] "RemoveContainer" containerID="571f2f0d095093b25606c15c6559ea2443bec43af1017a2d950ecc2a2775b846" Dec 11 10:36:28 crc kubenswrapper[4953]: E1211 10:36:28.230160 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"571f2f0d095093b25606c15c6559ea2443bec43af1017a2d950ecc2a2775b846\": container with ID starting with 571f2f0d095093b25606c15c6559ea2443bec43af1017a2d950ecc2a2775b846 not found: ID does not exist" containerID="571f2f0d095093b25606c15c6559ea2443bec43af1017a2d950ecc2a2775b846" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.230182 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"571f2f0d095093b25606c15c6559ea2443bec43af1017a2d950ecc2a2775b846"} err="failed to get container status \"571f2f0d095093b25606c15c6559ea2443bec43af1017a2d950ecc2a2775b846\": rpc error: code = NotFound desc = could not find container \"571f2f0d095093b25606c15c6559ea2443bec43af1017a2d950ecc2a2775b846\": container with ID starting with 571f2f0d095093b25606c15c6559ea2443bec43af1017a2d950ecc2a2775b846 not found: ID does not exist" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.361773 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="26d03f3f-057f-4c77-82a4-c394e0732e01" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.361792 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="26d03f3f-057f-4c77-82a4-c394e0732e01" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.488443 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d95335-f50e-4677-b3f9-bad2ab143c43" path="/var/lib/kubelet/pods/91d95335-f50e-4677-b3f9-bad2ab143c43/volumes" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.489321 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc941d7f-79c3-4661-992e-7a27e3a6c6d9" path="/var/lib/kubelet/pods/bc941d7f-79c3-4661-992e-7a27e3a6c6d9/volumes" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.490672 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.538827 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.643931 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-qfmcg"] Dec 11 10:36:28 crc kubenswrapper[4953]: I1211 10:36:28.644213 4953 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" podUID="586be632-9d3d-46be-9de4-5059e771edcf" containerName="dnsmasq-dns" containerID="cri-o://8380e2880db7a1c1a8b15a48693d500857a01fbf3050e1a2877a5a73f7d990c1" gracePeriod=10 Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.100278 4953 generic.go:334] "Generic (PLEG): container finished" podID="586be632-9d3d-46be-9de4-5059e771edcf" containerID="8380e2880db7a1c1a8b15a48693d500857a01fbf3050e1a2877a5a73f7d990c1" exitCode=0 Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.100371 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" event={"ID":"586be632-9d3d-46be-9de4-5059e771edcf","Type":"ContainerDied","Data":"8380e2880db7a1c1a8b15a48693d500857a01fbf3050e1a2877a5a73f7d990c1"} Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.108605 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7","Type":"ContainerStarted","Data":"33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde"} Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.108878 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7","Type":"ContainerStarted","Data":"045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97"} Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.108893 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7","Type":"ContainerStarted","Data":"d1a4cc9e02f5afe157cecb4e07567a37fb92d7b5f03fda015c6255417d7eb71b"} Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.171460 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.171435729 podStartE2EDuration="2.171435729s" podCreationTimestamp="2025-12-11 10:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:36:29.126619359 +0000 UTC m=+1507.150478392" watchObservedRunningTime="2025-12-11 10:36:29.171435729 +0000 UTC m=+1507.195294762" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.300322 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.402450 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-ovsdbserver-sb\") pod \"586be632-9d3d-46be-9de4-5059e771edcf\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.402633 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9zdp\" (UniqueName: \"kubernetes.io/projected/586be632-9d3d-46be-9de4-5059e771edcf-kube-api-access-q9zdp\") pod \"586be632-9d3d-46be-9de4-5059e771edcf\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.402680 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-dns-swift-storage-0\") pod \"586be632-9d3d-46be-9de4-5059e771edcf\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.402714 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-dns-svc\") pod \"586be632-9d3d-46be-9de4-5059e771edcf\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.402747 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-ovsdbserver-nb\") pod \"586be632-9d3d-46be-9de4-5059e771edcf\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.402779 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-config\") pod \"586be632-9d3d-46be-9de4-5059e771edcf\" (UID: \"586be632-9d3d-46be-9de4-5059e771edcf\") " Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.410009 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586be632-9d3d-46be-9de4-5059e771edcf-kube-api-access-q9zdp" (OuterVolumeSpecName: "kube-api-access-q9zdp") pod "586be632-9d3d-46be-9de4-5059e771edcf" (UID: "586be632-9d3d-46be-9de4-5059e771edcf"). InnerVolumeSpecName "kube-api-access-q9zdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.471704 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-config" (OuterVolumeSpecName: "config") pod "586be632-9d3d-46be-9de4-5059e771edcf" (UID: "586be632-9d3d-46be-9de4-5059e771edcf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.475206 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "586be632-9d3d-46be-9de4-5059e771edcf" (UID: "586be632-9d3d-46be-9de4-5059e771edcf"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.493183 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "586be632-9d3d-46be-9de4-5059e771edcf" (UID: "586be632-9d3d-46be-9de4-5059e771edcf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.500431 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "586be632-9d3d-46be-9de4-5059e771edcf" (UID: "586be632-9d3d-46be-9de4-5059e771edcf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.504905 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.504936 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.504947 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9zdp\" (UniqueName: \"kubernetes.io/projected/586be632-9d3d-46be-9de4-5059e771edcf-kube-api-access-q9zdp\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.504976 4953 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.504985 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.511446 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "586be632-9d3d-46be-9de4-5059e771edcf" (UID: "586be632-9d3d-46be-9de4-5059e771edcf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.608276 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/586be632-9d3d-46be-9de4-5059e771edcf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.630828 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.811694 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-combined-ca-bundle\") pod \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.812070 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-scripts\") pod \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.812180 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-config-data\") pod \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.812772 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mpbh\" (UniqueName: \"kubernetes.io/projected/8092c791-0c1a-454e-9fe8-b3dcb63c3415-kube-api-access-9mpbh\") pod \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\" (UID: \"8092c791-0c1a-454e-9fe8-b3dcb63c3415\") " Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.825154 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8092c791-0c1a-454e-9fe8-b3dcb63c3415-kube-api-access-9mpbh" (OuterVolumeSpecName: "kube-api-access-9mpbh") pod "8092c791-0c1a-454e-9fe8-b3dcb63c3415" (UID: "8092c791-0c1a-454e-9fe8-b3dcb63c3415"). InnerVolumeSpecName "kube-api-access-9mpbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.841709 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-config-data" (OuterVolumeSpecName: "config-data") pod "8092c791-0c1a-454e-9fe8-b3dcb63c3415" (UID: "8092c791-0c1a-454e-9fe8-b3dcb63c3415"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.841917 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-scripts" (OuterVolumeSpecName: "scripts") pod "8092c791-0c1a-454e-9fe8-b3dcb63c3415" (UID: "8092c791-0c1a-454e-9fe8-b3dcb63c3415"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.844348 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8092c791-0c1a-454e-9fe8-b3dcb63c3415" (UID: "8092c791-0c1a-454e-9fe8-b3dcb63c3415"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.915402 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.915452 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mpbh\" (UniqueName: \"kubernetes.io/projected/8092c791-0c1a-454e-9fe8-b3dcb63c3415-kube-api-access-9mpbh\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.915464 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:29 crc kubenswrapper[4953]: I1211 10:36:29.915473 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8092c791-0c1a-454e-9fe8-b3dcb63c3415-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.129086 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" event={"ID":"586be632-9d3d-46be-9de4-5059e771edcf","Type":"ContainerDied","Data":"d7f5fceb502709e64de5699368af6f800334811ac687f77bec1862433c0f5f1c"} Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.129101 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.129465 4953 scope.go:117] "RemoveContainer" containerID="8380e2880db7a1c1a8b15a48693d500857a01fbf3050e1a2877a5a73f7d990c1" Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.138016 4953 generic.go:334] "Generic (PLEG): container finished" podID="33f6128e-32cb-454f-ba24-3c8e4e1cb2ba" containerID="884bf7209c56c792aa6f6119e59a431349030aeb6799dab773a8e92ad0b9f5b2" exitCode=0 Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.138083 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8g247" event={"ID":"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba","Type":"ContainerDied","Data":"884bf7209c56c792aa6f6119e59a431349030aeb6799dab773a8e92ad0b9f5b2"} Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.145025 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8t2m9" Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.145373 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8t2m9" event={"ID":"8092c791-0c1a-454e-9fe8-b3dcb63c3415","Type":"ContainerDied","Data":"0b7b8c7026220a9a0348421f17d2899505dd32db42d72a45721d525546fc2ddf"} Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.145428 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b7b8c7026220a9a0348421f17d2899505dd32db42d72a45721d525546fc2ddf" Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.167946 4953 scope.go:117] "RemoveContainer" containerID="6643f662cdbb7192253da376618dc19598e53df9833c526637d583d007913afd" Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.201930 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-qfmcg"] Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.210752 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-qfmcg"] Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.295793 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.296053 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="26d03f3f-057f-4c77-82a4-c394e0732e01" containerName="nova-api-log" containerID="cri-o://905c00d9bffdea6c8106bea9fc6becce3f6025d8a8507741a6ad08e738b95d78" gracePeriod=30 Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.296146 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="26d03f3f-057f-4c77-82a4-c394e0732e01" containerName="nova-api-api" containerID="cri-o://a7f8fd17513ee9bb15963858d05565c67c15d09f6c9fa792151763f52954317e" gracePeriod=30 Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.318635 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.349332 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:30 crc kubenswrapper[4953]: I1211 10:36:30.484779 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586be632-9d3d-46be-9de4-5059e771edcf" path="/var/lib/kubelet/pods/586be632-9d3d-46be-9de4-5059e771edcf/volumes" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.156616 4953 generic.go:334] "Generic (PLEG): container finished" podID="26d03f3f-057f-4c77-82a4-c394e0732e01" containerID="905c00d9bffdea6c8106bea9fc6becce3f6025d8a8507741a6ad08e738b95d78" exitCode=143 Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.156737 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26d03f3f-057f-4c77-82a4-c394e0732e01","Type":"ContainerDied","Data":"905c00d9bffdea6c8106bea9fc6becce3f6025d8a8507741a6ad08e738b95d78"} Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.156876 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9df1c590-48bc-4795-8655-114657aa49e9" containerName="nova-scheduler-scheduler" containerID="cri-o://ec925673fe6d4a03f8a09fe186a8829fa6e16cec926305276056a86bc9311fa0" gracePeriod=30 Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.157045 4953 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" containerName="nova-metadata-log" containerID="cri-o://045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97" gracePeriod=30 Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.157051 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" containerName="nova-metadata-metadata" containerID="cri-o://33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde" gracePeriod=30 Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.474956 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.648525 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-scripts\") pod \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.648751 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-config-data\") pod \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.648951 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-combined-ca-bundle\") pod \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.649040 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8q9n\" (UniqueName: \"kubernetes.io/projected/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-kube-api-access-v8q9n\") pod \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\" (UID: \"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba\") " Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.705875 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-scripts" (OuterVolumeSpecName: "scripts") pod "33f6128e-32cb-454f-ba24-3c8e4e1cb2ba" (UID: "33f6128e-32cb-454f-ba24-3c8e4e1cb2ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.708446 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-kube-api-access-v8q9n" (OuterVolumeSpecName: "kube-api-access-v8q9n") pod "33f6128e-32cb-454f-ba24-3c8e4e1cb2ba" (UID: "33f6128e-32cb-454f-ba24-3c8e4e1cb2ba"). InnerVolumeSpecName "kube-api-access-v8q9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.715810 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-config-data" (OuterVolumeSpecName: "config-data") pod "33f6128e-32cb-454f-ba24-3c8e4e1cb2ba" (UID: "33f6128e-32cb-454f-ba24-3c8e4e1cb2ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.738299 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33f6128e-32cb-454f-ba24-3c8e4e1cb2ba" (UID: "33f6128e-32cb-454f-ba24-3c8e4e1cb2ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.751336 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.751531 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8q9n\" (UniqueName: \"kubernetes.io/projected/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-kube-api-access-v8q9n\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.751610 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.751668 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.800604 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.852774 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-nova-metadata-tls-certs\") pod \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.852848 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-combined-ca-bundle\") pod \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.852947 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-logs\") pod \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.853020 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-config-data\") pod \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.853121 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfl8k\" (UniqueName: \"kubernetes.io/projected/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-kube-api-access-tfl8k\") pod \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\" (UID: \"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7\") " Dec 11 10:36:31 crc 
kubenswrapper[4953]: I1211 10:36:31.853696 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-logs" (OuterVolumeSpecName: "logs") pod "818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" (UID: "818f55f9-e432-4bb0-84a0-a4a1f76dbcf7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.857778 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-kube-api-access-tfl8k" (OuterVolumeSpecName: "kube-api-access-tfl8k") pod "818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" (UID: "818f55f9-e432-4bb0-84a0-a4a1f76dbcf7"). InnerVolumeSpecName "kube-api-access-tfl8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.877588 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" (UID: "818f55f9-e432-4bb0-84a0-a4a1f76dbcf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.880041 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-config-data" (OuterVolumeSpecName: "config-data") pod "818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" (UID: "818f55f9-e432-4bb0-84a0-a4a1f76dbcf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.900847 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" (UID: "818f55f9-e432-4bb0-84a0-a4a1f76dbcf7"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.955540 4953 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.955598 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.955610 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.955619 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:31 crc kubenswrapper[4953]: I1211 10:36:31.955628 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfl8k\" (UniqueName: \"kubernetes.io/projected/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7-kube-api-access-tfl8k\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.171536 4953 generic.go:334] "Generic (PLEG): container finished" podID="818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" containerID="33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde" exitCode=0 Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.171645 4953 generic.go:334] "Generic (PLEG): container finished" podID="818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" containerID="045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97" exitCode=143 Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.171700 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7","Type":"ContainerDied","Data":"33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde"} Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.171730 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7","Type":"ContainerDied","Data":"045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97"} Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.171742 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"818f55f9-e432-4bb0-84a0-a4a1f76dbcf7","Type":"ContainerDied","Data":"d1a4cc9e02f5afe157cecb4e07567a37fb92d7b5f03fda015c6255417d7eb71b"} Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.171757 4953 scope.go:117] "RemoveContainer" containerID="33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.171854 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.176551 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8g247" event={"ID":"33f6128e-32cb-454f-ba24-3c8e4e1cb2ba","Type":"ContainerDied","Data":"8389aa2f6d69e90820af173fa51d663464de769d92173015a53cae4528723eb7"} Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.176720 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8389aa2f6d69e90820af173fa51d663464de769d92173015a53cae4528723eb7" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.176863 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8g247" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.206186 4953 scope.go:117] "RemoveContainer" containerID="045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.256064 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.260773 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" containerName="nova-metadata-metadata" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.260821 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" containerName="nova-metadata-metadata" Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.260888 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f6128e-32cb-454f-ba24-3c8e4e1cb2ba" containerName="nova-cell1-conductor-db-sync" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.260900 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f6128e-32cb-454f-ba24-3c8e4e1cb2ba" containerName="nova-cell1-conductor-db-sync" Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.260971 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d95335-f50e-4677-b3f9-bad2ab143c43" containerName="extract-content" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.260980 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d95335-f50e-4677-b3f9-bad2ab143c43" containerName="extract-content" Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.261005 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" containerName="nova-metadata-log" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.261012 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" containerName="nova-metadata-log" Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.261049 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8092c791-0c1a-454e-9fe8-b3dcb63c3415" containerName="nova-manage" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.261057 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8092c791-0c1a-454e-9fe8-b3dcb63c3415" containerName="nova-manage" Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.261075 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586be632-9d3d-46be-9de4-5059e771edcf" containerName="init" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.261082 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="586be632-9d3d-46be-9de4-5059e771edcf" containerName="init" Dec 11 10:36:32 crc 
kubenswrapper[4953]: E1211 10:36:32.261101 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586be632-9d3d-46be-9de4-5059e771edcf" containerName="dnsmasq-dns" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.261113 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="586be632-9d3d-46be-9de4-5059e771edcf" containerName="dnsmasq-dns" Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.261143 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d95335-f50e-4677-b3f9-bad2ab143c43" containerName="extract-utilities" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.261152 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d95335-f50e-4677-b3f9-bad2ab143c43" containerName="extract-utilities" Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.261197 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d95335-f50e-4677-b3f9-bad2ab143c43" containerName="registry-server" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.261206 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d95335-f50e-4677-b3f9-bad2ab143c43" containerName="registry-server" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.261974 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="8092c791-0c1a-454e-9fe8-b3dcb63c3415" containerName="nova-manage" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.261992 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="586be632-9d3d-46be-9de4-5059e771edcf" containerName="dnsmasq-dns" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.262025 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f6128e-32cb-454f-ba24-3c8e4e1cb2ba" containerName="nova-cell1-conductor-db-sync" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.262042 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" containerName="nova-metadata-metadata" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.262067 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" containerName="nova-metadata-log" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.262091 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d95335-f50e-4677-b3f9-bad2ab143c43" containerName="registry-server" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.263268 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.264625 4953 scope.go:117] "RemoveContainer" containerID="33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.266600 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.281135 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde\": container with ID starting with 33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde not found: ID does not exist" containerID="33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.281203 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde"} err="failed to get container status \"33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde\": rpc error: code = NotFound desc = could not find container \"33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde\": container with ID starting with 33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde not found: ID does not exist" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.281235 4953 scope.go:117] "RemoveContainer" containerID="045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97" Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.282906 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97\": container with ID starting with 045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97 not found: ID does not exist" containerID="045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.282994 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97"} err="failed to get container status \"045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97\": rpc error: code = NotFound desc = could not find container \"045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97\": container with ID starting with 045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97 not found: ID does not exist" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.283114 4953 scope.go:117] "RemoveContainer" containerID="33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.288059 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde"} err="failed to get container status \"33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde\": rpc error: code = NotFound desc = could not find container \"33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde\": container with ID starting with 33c3f7cd2dbe31b39b3eff35d5bfe0b7fd8ee2f34a216900425fb62e9d79abde not found: ID does not exist" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.288122 4953 
scope.go:117] "RemoveContainer" containerID="045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.290148 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97"} err="failed to get container status \"045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97\": rpc error: code = NotFound desc = could not find container \"045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97\": container with ID starting with 045caea100029c18f28e3201473489ea7044586becb6999cf0bfb61b2932da97 not found: ID does not exist" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.295068 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.306956 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.317237 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.326618 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.328832 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.330961 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.331207 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.338095 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.470296 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.470358 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx6ft\" (UniqueName: \"kubernetes.io/projected/5c566b6b-16f8-422c-acda-0325e36103e6-kube-api-access-fx6ft\") pod \"nova-cell1-conductor-0\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.470395 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.489764 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="818f55f9-e432-4bb0-84a0-a4a1f76dbcf7" path="/var/lib/kubelet/pods/818f55f9-e432-4bb0-84a0-a4a1f76dbcf7/volumes" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.572551 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x545\" (UniqueName: \"kubernetes.io/projected/82055682-e0c4-4cf0-b034-49ba335cb911-kube-api-access-4x545\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.572660 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx6ft\" (UniqueName: \"kubernetes.io/projected/5c566b6b-16f8-422c-acda-0325e36103e6-kube-api-access-fx6ft\") pod \"nova-cell1-conductor-0\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.572727 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.572798 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-config-data\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.572822 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.573059 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82055682-e0c4-4cf0-b034-49ba335cb911-logs\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.573118 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.573170 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.576966 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.577654 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.588798 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx6ft\" (UniqueName: \"kubernetes.io/projected/5c566b6b-16f8-422c-acda-0325e36103e6-kube-api-access-fx6ft\") pod \"nova-cell1-conductor-0\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.642638 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.673961 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x545\" (UniqueName: \"kubernetes.io/projected/82055682-e0c4-4cf0-b034-49ba335cb911-kube-api-access-4x545\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.674065 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-config-data\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.674089 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.674157 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82055682-e0c4-4cf0-b034-49ba335cb911-logs\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.674198 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.674755 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82055682-e0c4-4cf0-b034-49ba335cb911-logs\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.677926 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.678188 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.678486 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-config-data\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.693141 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x545\" (UniqueName: \"kubernetes.io/projected/82055682-e0c4-4cf0-b034-49ba335cb911-kube-api-access-4x545\") pod \"nova-metadata-0\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " pod="openstack/nova-metadata-0" Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.919222 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec925673fe6d4a03f8a09fe186a8829fa6e16cec926305276056a86bc9311fa0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.921483 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec925673fe6d4a03f8a09fe186a8829fa6e16cec926305276056a86bc9311fa0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.922663 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec925673fe6d4a03f8a09fe186a8829fa6e16cec926305276056a86bc9311fa0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:36:32 crc kubenswrapper[4953]: E1211 10:36:32.922734 4953 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9df1c590-48bc-4795-8655-114657aa49e9" containerName="nova-scheduler-scheduler" Dec 11 10:36:32 crc kubenswrapper[4953]: I1211 10:36:32.954639 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.166204 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.189377 4953 generic.go:334] "Generic (PLEG): container finished" podID="9df1c590-48bc-4795-8655-114657aa49e9" containerID="ec925673fe6d4a03f8a09fe186a8829fa6e16cec926305276056a86bc9311fa0" exitCode=0 Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.189420 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9df1c590-48bc-4795-8655-114657aa49e9","Type":"ContainerDied","Data":"ec925673fe6d4a03f8a09fe186a8829fa6e16cec926305276056a86bc9311fa0"} Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.409648 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:36:33 crc kubenswrapper[4953]: W1211 10:36:33.413184 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82055682_e0c4_4cf0_b034_49ba335cb911.slice/crio-edd9edf2a2a254f73a3ed818f217d10dd7499c48ae3bd59b35279daed244209c WatchSource:0}: Error finding container edd9edf2a2a254f73a3ed818f217d10dd7499c48ae3bd59b35279daed244209c: Status 404 returned error can't find the container with id edd9edf2a2a254f73a3ed818f217d10dd7499c48ae3bd59b35279daed244209c Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.629118 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.634530 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-799wh\" (UniqueName: \"kubernetes.io/projected/9df1c590-48bc-4795-8655-114657aa49e9-kube-api-access-799wh\") pod \"9df1c590-48bc-4795-8655-114657aa49e9\" (UID: \"9df1c590-48bc-4795-8655-114657aa49e9\") " Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.634808 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df1c590-48bc-4795-8655-114657aa49e9-combined-ca-bundle\") pod \"9df1c590-48bc-4795-8655-114657aa49e9\" (UID: \"9df1c590-48bc-4795-8655-114657aa49e9\") " Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.634877 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df1c590-48bc-4795-8655-114657aa49e9-config-data\") pod \"9df1c590-48bc-4795-8655-114657aa49e9\" (UID: \"9df1c590-48bc-4795-8655-114657aa49e9\") " Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.640314 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df1c590-48bc-4795-8655-114657aa49e9-kube-api-access-799wh" (OuterVolumeSpecName: "kube-api-access-799wh") pod "9df1c590-48bc-4795-8655-114657aa49e9" (UID: "9df1c590-48bc-4795-8655-114657aa49e9"). InnerVolumeSpecName "kube-api-access-799wh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.746442 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-799wh\" (UniqueName: \"kubernetes.io/projected/9df1c590-48bc-4795-8655-114657aa49e9-kube-api-access-799wh\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.753092 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df1c590-48bc-4795-8655-114657aa49e9-config-data" (OuterVolumeSpecName: "config-data") pod "9df1c590-48bc-4795-8655-114657aa49e9" (UID: "9df1c590-48bc-4795-8655-114657aa49e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.791735 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df1c590-48bc-4795-8655-114657aa49e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9df1c590-48bc-4795-8655-114657aa49e9" (UID: "9df1c590-48bc-4795-8655-114657aa49e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.848275 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df1c590-48bc-4795-8655-114657aa49e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.848304 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df1c590-48bc-4795-8655-114657aa49e9-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:33 crc kubenswrapper[4953]: I1211 10:36:33.962881 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75bfc9b94f-qfmcg" podUID="586be632-9d3d-46be-9de4-5059e771edcf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.161:5353: i/o timeout" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.217343 4953 generic.go:334] "Generic (PLEG): container finished" podID="26d03f3f-057f-4c77-82a4-c394e0732e01" containerID="a7f8fd17513ee9bb15963858d05565c67c15d09f6c9fa792151763f52954317e" exitCode=0 Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.217446 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26d03f3f-057f-4c77-82a4-c394e0732e01","Type":"ContainerDied","Data":"a7f8fd17513ee9bb15963858d05565c67c15d09f6c9fa792151763f52954317e"} Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.219835 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9df1c590-48bc-4795-8655-114657aa49e9","Type":"ContainerDied","Data":"b391b3286663695c17f187cba0967ea7e70dcc58488bfba7b247ef5adacc94b6"} Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.219872 4953 scope.go:117] "RemoveContainer" containerID="ec925673fe6d4a03f8a09fe186a8829fa6e16cec926305276056a86bc9311fa0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.220478 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.344000 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82055682-e0c4-4cf0-b034-49ba335cb911","Type":"ContainerStarted","Data":"1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2"} Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.344062 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82055682-e0c4-4cf0-b034-49ba335cb911","Type":"ContainerStarted","Data":"f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910"} Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.344075 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82055682-e0c4-4cf0-b034-49ba335cb911","Type":"ContainerStarted","Data":"edd9edf2a2a254f73a3ed818f217d10dd7499c48ae3bd59b35279daed244209c"} Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.382519 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5c566b6b-16f8-422c-acda-0325e36103e6","Type":"ContainerStarted","Data":"83eedb4ddd84362084d8ccac38fed9fcbcacfbfefe97227d1e7bf4df1164fbc0"} Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.382592 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.382608 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5c566b6b-16f8-422c-acda-0325e36103e6","Type":"ContainerStarted","Data":"46060cb0b03d6217003dc0e2828ed30e4435ff802d981fb000c90b8daf0398fb"} Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.401512 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.401482167 podStartE2EDuration="2.401482167s" podCreationTimestamp="2025-12-11 10:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:36:34.377123291 +0000 UTC m=+1512.400982344" watchObservedRunningTime="2025-12-11 10:36:34.401482167 +0000 UTC m=+1512.425341210" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.430338 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.430314901 podStartE2EDuration="2.430314901s" podCreationTimestamp="2025-12-11 10:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:36:34.404242693 +0000 UTC m=+1512.428101726" watchObservedRunningTime="2025-12-11 10:36:34.430314901 +0000 UTC m=+1512.454173934" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.512956 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.522911 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.540754 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.549930 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:36:34 crc kubenswrapper[4953]: E1211 10:36:34.550303 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d03f3f-057f-4c77-82a4-c394e0732e01" containerName="nova-api-api" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.550320 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d03f3f-057f-4c77-82a4-c394e0732e01" containerName="nova-api-api" Dec 11 10:36:34 crc kubenswrapper[4953]: E1211 10:36:34.550354 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df1c590-48bc-4795-8655-114657aa49e9" containerName="nova-scheduler-scheduler" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.550361 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df1c590-48bc-4795-8655-114657aa49e9" containerName="nova-scheduler-scheduler" Dec 11 10:36:34 crc kubenswrapper[4953]: E1211 10:36:34.550372 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d03f3f-057f-4c77-82a4-c394e0732e01" containerName="nova-api-log" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.550378 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d03f3f-057f-4c77-82a4-c394e0732e01" containerName="nova-api-log" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.550556 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d03f3f-057f-4c77-82a4-c394e0732e01" containerName="nova-api-api" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.550598 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df1c590-48bc-4795-8655-114657aa49e9" containerName="nova-scheduler-scheduler" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.550618 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d03f3f-057f-4c77-82a4-c394e0732e01" containerName="nova-api-log" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.551193 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.555105 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.570270 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.649105 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26d03f3f-057f-4c77-82a4-c394e0732e01-logs\") pod \"26d03f3f-057f-4c77-82a4-c394e0732e01\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.649208 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d03f3f-057f-4c77-82a4-c394e0732e01-combined-ca-bundle\") pod \"26d03f3f-057f-4c77-82a4-c394e0732e01\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.649247 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d03f3f-057f-4c77-82a4-c394e0732e01-config-data\") pod \"26d03f3f-057f-4c77-82a4-c394e0732e01\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.649340 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk6f2\" (UniqueName: \"kubernetes.io/projected/26d03f3f-057f-4c77-82a4-c394e0732e01-kube-api-access-xk6f2\") pod \"26d03f3f-057f-4c77-82a4-c394e0732e01\" (UID: \"26d03f3f-057f-4c77-82a4-c394e0732e01\") " Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.649544 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjqjr\" (UniqueName: \"kubernetes.io/projected/88583b85-6fea-4cce-afee-c2dd1d16c119-kube-api-access-cjqjr\") pod \"nova-scheduler-0\" (UID: \"88583b85-6fea-4cce-afee-c2dd1d16c119\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.649648 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88583b85-6fea-4cce-afee-c2dd1d16c119-config-data\") pod \"nova-scheduler-0\" (UID: \"88583b85-6fea-4cce-afee-c2dd1d16c119\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.649743 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88583b85-6fea-4cce-afee-c2dd1d16c119-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"88583b85-6fea-4cce-afee-c2dd1d16c119\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.650657 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26d03f3f-057f-4c77-82a4-c394e0732e01-logs" (OuterVolumeSpecName: "logs") pod "26d03f3f-057f-4c77-82a4-c394e0732e01" (UID: "26d03f3f-057f-4c77-82a4-c394e0732e01"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.653343 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d03f3f-057f-4c77-82a4-c394e0732e01-kube-api-access-xk6f2" (OuterVolumeSpecName: "kube-api-access-xk6f2") pod "26d03f3f-057f-4c77-82a4-c394e0732e01" (UID: "26d03f3f-057f-4c77-82a4-c394e0732e01"). InnerVolumeSpecName "kube-api-access-xk6f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.674659 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d03f3f-057f-4c77-82a4-c394e0732e01-config-data" (OuterVolumeSpecName: "config-data") pod "26d03f3f-057f-4c77-82a4-c394e0732e01" (UID: "26d03f3f-057f-4c77-82a4-c394e0732e01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.685320 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d03f3f-057f-4c77-82a4-c394e0732e01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26d03f3f-057f-4c77-82a4-c394e0732e01" (UID: "26d03f3f-057f-4c77-82a4-c394e0732e01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.751323 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88583b85-6fea-4cce-afee-c2dd1d16c119-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"88583b85-6fea-4cce-afee-c2dd1d16c119\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.751452 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjqjr\" (UniqueName: \"kubernetes.io/projected/88583b85-6fea-4cce-afee-c2dd1d16c119-kube-api-access-cjqjr\") pod \"nova-scheduler-0\" (UID: \"88583b85-6fea-4cce-afee-c2dd1d16c119\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.751524 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88583b85-6fea-4cce-afee-c2dd1d16c119-config-data\") pod \"nova-scheduler-0\" (UID: \"88583b85-6fea-4cce-afee-c2dd1d16c119\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.751702 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26d03f3f-057f-4c77-82a4-c394e0732e01-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.751726 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d03f3f-057f-4c77-82a4-c394e0732e01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.751741 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d03f3f-057f-4c77-82a4-c394e0732e01-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.751754 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk6f2\" (UniqueName: \"kubernetes.io/projected/26d03f3f-057f-4c77-82a4-c394e0732e01-kube-api-access-xk6f2\") on node \"crc\" DevicePath 
\"\"" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.759341 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88583b85-6fea-4cce-afee-c2dd1d16c119-config-data\") pod \"nova-scheduler-0\" (UID: \"88583b85-6fea-4cce-afee-c2dd1d16c119\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.763251 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88583b85-6fea-4cce-afee-c2dd1d16c119-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"88583b85-6fea-4cce-afee-c2dd1d16c119\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.835250 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjqjr\" (UniqueName: \"kubernetes.io/projected/88583b85-6fea-4cce-afee-c2dd1d16c119-kube-api-access-cjqjr\") pod \"nova-scheduler-0\" (UID: \"88583b85-6fea-4cce-afee-c2dd1d16c119\") " pod="openstack/nova-scheduler-0" Dec 11 10:36:34 crc kubenswrapper[4953]: I1211 10:36:34.868353 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.393682 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26d03f3f-057f-4c77-82a4-c394e0732e01","Type":"ContainerDied","Data":"7efddcacca3bbc9a8762a8baaceef3ec2bfacb66df8150b3e2cfd5c361508d3b"} Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.394867 4953 scope.go:117] "RemoveContainer" containerID="a7f8fd17513ee9bb15963858d05565c67c15d09f6c9fa792151763f52954317e" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.393866 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.425208 4953 scope.go:117] "RemoveContainer" containerID="905c00d9bffdea6c8106bea9fc6becce3f6025d8a8507741a6ad08e738b95d78" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.447777 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.465715 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:36:35 crc kubenswrapper[4953]: W1211 10:36:35.472016 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88583b85_6fea_4cce_afee_c2dd1d16c119.slice/crio-db63c092c7a16dd6ea4a2263317c1496c17b35997d31b0d5e99a3bd2dfec5759 WatchSource:0}: Error finding container db63c092c7a16dd6ea4a2263317c1496c17b35997d31b0d5e99a3bd2dfec5759: Status 404 returned error can't find the container with id db63c092c7a16dd6ea4a2263317c1496c17b35997d31b0d5e99a3bd2dfec5759 Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.487255 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.495809 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.497474 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.501950 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.504784 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.585015 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7104b2f1-eb69-4aa5-877e-44393692917b-logs\") pod \"nova-api-0\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") " pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.585542 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7104b2f1-eb69-4aa5-877e-44393692917b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") " pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.585645 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b8j5\" (UniqueName: \"kubernetes.io/projected/7104b2f1-eb69-4aa5-877e-44393692917b-kube-api-access-6b8j5\") pod \"nova-api-0\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") " pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.585703 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7104b2f1-eb69-4aa5-877e-44393692917b-config-data\") pod \"nova-api-0\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") " pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.688505 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b8j5\" (UniqueName: \"kubernetes.io/projected/7104b2f1-eb69-4aa5-877e-44393692917b-kube-api-access-6b8j5\") pod \"nova-api-0\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") " pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.688943 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7104b2f1-eb69-4aa5-877e-44393692917b-config-data\") pod \"nova-api-0\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") " pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.689968 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7104b2f1-eb69-4aa5-877e-44393692917b-logs\") pod \"nova-api-0\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") " pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.690102 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7104b2f1-eb69-4aa5-877e-44393692917b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") " pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.690440 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7104b2f1-eb69-4aa5-877e-44393692917b-logs\") pod \"nova-api-0\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") " 
pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.693290 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7104b2f1-eb69-4aa5-877e-44393692917b-config-data\") pod \"nova-api-0\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") " pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.694154 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7104b2f1-eb69-4aa5-877e-44393692917b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") " pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.709329 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b8j5\" (UniqueName: \"kubernetes.io/projected/7104b2f1-eb69-4aa5-877e-44393692917b-kube-api-access-6b8j5\") pod \"nova-api-0\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") " pod="openstack/nova-api-0" Dec 11 10:36:35 crc kubenswrapper[4953]: I1211 10:36:35.980168 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:36:36 crc kubenswrapper[4953]: I1211 10:36:36.412639 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88583b85-6fea-4cce-afee-c2dd1d16c119","Type":"ContainerStarted","Data":"368cfa6521194da5dfbcfe50116277e0d94e4595de7ae4e9091034224c726808"} Dec 11 10:36:36 crc kubenswrapper[4953]: I1211 10:36:36.413031 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88583b85-6fea-4cce-afee-c2dd1d16c119","Type":"ContainerStarted","Data":"db63c092c7a16dd6ea4a2263317c1496c17b35997d31b0d5e99a3bd2dfec5759"} Dec 11 10:36:36 crc kubenswrapper[4953]: I1211 10:36:36.439645 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:36:36 crc kubenswrapper[4953]: I1211 10:36:36.440487 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.440475493 podStartE2EDuration="2.440475493s" podCreationTimestamp="2025-12-11 10:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:36:36.429132601 +0000 UTC m=+1514.452991634" watchObservedRunningTime="2025-12-11 10:36:36.440475493 +0000 UTC m=+1514.464334516" Dec 11 10:36:36 crc kubenswrapper[4953]: I1211 10:36:36.559385 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26d03f3f-057f-4c77-82a4-c394e0732e01" path="/var/lib/kubelet/pods/26d03f3f-057f-4c77-82a4-c394e0732e01/volumes" Dec 11 10:36:36 crc kubenswrapper[4953]: I1211 10:36:36.562251 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df1c590-48bc-4795-8655-114657aa49e9" path="/var/lib/kubelet/pods/9df1c590-48bc-4795-8655-114657aa49e9/volumes" Dec 11 10:36:37 crc kubenswrapper[4953]: I1211 10:36:37.427118 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7104b2f1-eb69-4aa5-877e-44393692917b","Type":"ContainerStarted","Data":"110f33b0d9640908537ab39742fbcf0d7eab4125182ea890d59626b7442f2b0c"} Dec 11 10:36:37 crc kubenswrapper[4953]: I1211 10:36:37.427373 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"7104b2f1-eb69-4aa5-877e-44393692917b","Type":"ContainerStarted","Data":"fc6e86a289e775883e04a66a567e855f92e7e73afb00014467ab72ef173b250a"} Dec 11 10:36:37 crc kubenswrapper[4953]: I1211 10:36:37.427388 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7104b2f1-eb69-4aa5-877e-44393692917b","Type":"ContainerStarted","Data":"c3122bb293e1067898c9a9df5ec96f7b4773dd9b492d9ee90794ea9a1847999c"} Dec 11 10:36:37 crc kubenswrapper[4953]: I1211 10:36:37.447333 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.447316534 podStartE2EDuration="2.447316534s" podCreationTimestamp="2025-12-11 10:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:36:37.444529677 +0000 UTC m=+1515.468388710" watchObservedRunningTime="2025-12-11 10:36:37.447316534 +0000 UTC m=+1515.471175567" Dec 11 10:36:37 crc kubenswrapper[4953]: I1211 10:36:37.955451 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 10:36:37 crc kubenswrapper[4953]: I1211 10:36:37.955626 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 10:36:39 crc kubenswrapper[4953]: I1211 10:36:39.869632 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 10:36:42 crc kubenswrapper[4953]: I1211 10:36:42.685233 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 11 10:36:42 crc kubenswrapper[4953]: I1211 10:36:42.976739 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 10:36:42 crc kubenswrapper[4953]: I1211 10:36:42.976796 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 10:36:43 crc kubenswrapper[4953]: I1211 10:36:43.986784 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 10:36:43 crc kubenswrapper[4953]: I1211 10:36:43.986787 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 10:36:44 crc kubenswrapper[4953]: I1211 10:36:44.868789 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 10:36:44 crc kubenswrapper[4953]: I1211 10:36:44.900440 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 10:36:45 crc kubenswrapper[4953]: I1211 10:36:45.892095 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 11 10:36:45 crc kubenswrapper[4953]: I1211 10:36:45.981312 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 10:36:45 crc kubenswrapper[4953]: I1211 10:36:45.981722 4953 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 10:36:45 crc kubenswrapper[4953]: I1211 10:36:45.996467 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 11 10:36:47 crc kubenswrapper[4953]: I1211 10:36:47.127628 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7104b2f1-eb69-4aa5-877e-44393692917b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:36:47 crc kubenswrapper[4953]: I1211 10:36:47.127731 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7104b2f1-eb69-4aa5-877e-44393692917b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:36:52 crc kubenswrapper[4953]: I1211 10:36:52.961731 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 10:36:52 crc kubenswrapper[4953]: I1211 10:36:52.962385 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 10:36:52 crc kubenswrapper[4953]: I1211 10:36:52.966178 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 10:36:52 crc kubenswrapper[4953]: I1211 10:36:52.968760 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 10:36:55 crc kubenswrapper[4953]: I1211 10:36:55.896056 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:55 crc kubenswrapper[4953]: I1211 10:36:55.980300 4953 generic.go:334] "Generic (PLEG): container finished" podID="847529a7-a29f-4eb3-a678-3a909e4aa0f2" containerID="08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288" exitCode=137 Dec 11 10:36:55 crc kubenswrapper[4953]: I1211 10:36:55.980342 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"847529a7-a29f-4eb3-a678-3a909e4aa0f2","Type":"ContainerDied","Data":"08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288"} Dec 11 10:36:55 crc kubenswrapper[4953]: I1211 10:36:55.980367 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"847529a7-a29f-4eb3-a678-3a909e4aa0f2","Type":"ContainerDied","Data":"285dcb6dc561c46607ba5180a2be9a8496fe34ec01936e25a741c9ce9f7b7b5b"} Dec 11 10:36:55 crc kubenswrapper[4953]: I1211 10:36:55.980384 4953 scope.go:117] "RemoveContainer" containerID="08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288" Dec 11 10:36:55 crc kubenswrapper[4953]: I1211 10:36:55.980497 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:55 crc kubenswrapper[4953]: I1211 10:36:55.985076 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 10:36:55 crc kubenswrapper[4953]: I1211 10:36:55.985542 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 10:36:55 crc kubenswrapper[4953]: I1211 10:36:55.987979 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 10:36:55 crc kubenswrapper[4953]: I1211 10:36:55.999037 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.006715 4953 scope.go:117] "RemoveContainer" containerID="08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288" Dec 11 10:36:56 crc kubenswrapper[4953]: E1211 10:36:56.007950 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288\": container with ID starting with 08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288 not found: ID does not exist" containerID="08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.008010 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288"} err="failed to get container status \"08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288\": rpc error: code = NotFound desc = could not find container \"08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288\": container with ID starting with 08e860180f18e4342a471f4999eff5be287c0baa70b6b137634d6ee40f9b0288 not found: ID does not exist" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.070643 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847529a7-a29f-4eb3-a678-3a909e4aa0f2-config-data\") pod \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\" (UID: \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\") " Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.070797 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvfjv\" (UniqueName: \"kubernetes.io/projected/847529a7-a29f-4eb3-a678-3a909e4aa0f2-kube-api-access-kvfjv\") pod \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\" (UID: \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\") " Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.071006 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847529a7-a29f-4eb3-a678-3a909e4aa0f2-combined-ca-bundle\") pod \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\" (UID: \"847529a7-a29f-4eb3-a678-3a909e4aa0f2\") " Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.076554 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847529a7-a29f-4eb3-a678-3a909e4aa0f2-kube-api-access-kvfjv" (OuterVolumeSpecName: "kube-api-access-kvfjv") pod "847529a7-a29f-4eb3-a678-3a909e4aa0f2" (UID: "847529a7-a29f-4eb3-a678-3a909e4aa0f2"). InnerVolumeSpecName "kube-api-access-kvfjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.103889 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847529a7-a29f-4eb3-a678-3a909e4aa0f2-config-data" (OuterVolumeSpecName: "config-data") pod "847529a7-a29f-4eb3-a678-3a909e4aa0f2" (UID: "847529a7-a29f-4eb3-a678-3a909e4aa0f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.113758 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847529a7-a29f-4eb3-a678-3a909e4aa0f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "847529a7-a29f-4eb3-a678-3a909e4aa0f2" (UID: "847529a7-a29f-4eb3-a678-3a909e4aa0f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.173001 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847529a7-a29f-4eb3-a678-3a909e4aa0f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.173046 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847529a7-a29f-4eb3-a678-3a909e4aa0f2-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.173062 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvfjv\" (UniqueName: \"kubernetes.io/projected/847529a7-a29f-4eb3-a678-3a909e4aa0f2-kube-api-access-kvfjv\") on node \"crc\" DevicePath \"\"" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.321347 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.331226 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.344875 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:36:56 crc kubenswrapper[4953]: E1211 10:36:56.345284 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847529a7-a29f-4eb3-a678-3a909e4aa0f2" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.345315 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="847529a7-a29f-4eb3-a678-3a909e4aa0f2" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.345548 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="847529a7-a29f-4eb3-a678-3a909e4aa0f2" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.346381 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.350987 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.350984 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.351625 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.369716 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.377257 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.377332 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.377358 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s4ck\" (UniqueName: \"kubernetes.io/projected/79a93889-ae40-4bd1-a697-5797e065231b-kube-api-access-9s4ck\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.377399 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.377423 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.478955 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.479084 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.479133 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s4ck\" (UniqueName: \"kubernetes.io/projected/79a93889-ae40-4bd1-a697-5797e065231b-kube-api-access-9s4ck\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.479238 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.479950 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.484438 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.484636 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.485216 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.486796 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847529a7-a29f-4eb3-a678-3a909e4aa0f2" path="/var/lib/kubelet/pods/847529a7-a29f-4eb3-a678-3a909e4aa0f2/volumes" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.491692 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.496552 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s4ck\" (UniqueName: \"kubernetes.io/projected/79a93889-ae40-4bd1-a697-5797e065231b-kube-api-access-9s4ck\") pod \"nova-cell1-novncproxy-0\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.671246 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:36:56 crc kubenswrapper[4953]: I1211 10:36:56.998418 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.003660 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.154089 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:36:57 crc kubenswrapper[4953]: W1211 10:36:57.160942 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79a93889_ae40_4bd1_a697_5797e065231b.slice/crio-6c519abe2abb5f8fdb11e1a68ab9c639af684b5e3f48b346e7ea25f62c51f921 WatchSource:0}: Error finding container 6c519abe2abb5f8fdb11e1a68ab9c639af684b5e3f48b346e7ea25f62c51f921: Status 404 returned error can't find the container with id 6c519abe2abb5f8fdb11e1a68ab9c639af684b5e3f48b346e7ea25f62c51f921 Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.185599 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-sstm7"] Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.187200 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.202928 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-sstm7"] Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.299621 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.299799 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-dns-svc\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.299964 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.300009 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-config\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.300075 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: 
\"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.300107 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4knhw\" (UniqueName: \"kubernetes.io/projected/e27f6309-0ccd-4aca-ad87-0cd7a9357469-kube-api-access-4knhw\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.401735 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-dns-svc\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.402171 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.402199 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-config\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.402234 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.402261 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4knhw\" (UniqueName: \"kubernetes.io/projected/e27f6309-0ccd-4aca-ad87-0cd7a9357469-kube-api-access-4knhw\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.402377 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.402847 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-dns-svc\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.403408 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-config\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " 
pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.403544 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.403557 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.405983 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.424245 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4knhw\" (UniqueName: \"kubernetes.io/projected/e27f6309-0ccd-4aca-ad87-0cd7a9357469-kube-api-access-4knhw\") pod \"dnsmasq-dns-5ddd577785-sstm7\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:57 crc kubenswrapper[4953]: I1211 10:36:57.646374 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:36:58 crc kubenswrapper[4953]: I1211 10:36:58.013231 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"79a93889-ae40-4bd1-a697-5797e065231b","Type":"ContainerStarted","Data":"bc5af3bb14085fb06d4fbb19425f7956a6394356771a345adb23fe49da34c4ef"} Dec 11 10:36:58 crc kubenswrapper[4953]: I1211 10:36:58.013562 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"79a93889-ae40-4bd1-a697-5797e065231b","Type":"ContainerStarted","Data":"6c519abe2abb5f8fdb11e1a68ab9c639af684b5e3f48b346e7ea25f62c51f921"} Dec 11 10:36:58 crc kubenswrapper[4953]: I1211 10:36:58.034002 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.03397258 podStartE2EDuration="2.03397258s" podCreationTimestamp="2025-12-11 10:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:36:58.032642329 +0000 UTC m=+1536.056501362" watchObservedRunningTime="2025-12-11 10:36:58.03397258 +0000 UTC m=+1536.057831613" Dec 11 10:36:58 crc kubenswrapper[4953]: I1211 10:36:58.316475 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-sstm7"] Dec 11 10:36:59 crc kubenswrapper[4953]: I1211 10:36:59.033119 4953 generic.go:334] "Generic (PLEG): container finished" podID="e27f6309-0ccd-4aca-ad87-0cd7a9357469" containerID="171127e736e1696680b8224078e3d976deda865332d1d3c62b28d433837b0886" exitCode=0 Dec 11 10:36:59 crc kubenswrapper[4953]: I1211 10:36:59.033551 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" 
event={"ID":"e27f6309-0ccd-4aca-ad87-0cd7a9357469","Type":"ContainerDied","Data":"171127e736e1696680b8224078e3d976deda865332d1d3c62b28d433837b0886"} Dec 11 10:36:59 crc kubenswrapper[4953]: I1211 10:36:59.033706 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" event={"ID":"e27f6309-0ccd-4aca-ad87-0cd7a9357469","Type":"ContainerStarted","Data":"60ed0977fe7062141d08e06124136a5c59fe9dacbe6224d4d1e28a7cbdfffeb5"} Dec 11 10:36:59 crc kubenswrapper[4953]: I1211 10:36:59.705695 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:37:00 crc kubenswrapper[4953]: I1211 10:37:00.045347 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" event={"ID":"e27f6309-0ccd-4aca-ad87-0cd7a9357469","Type":"ContainerStarted","Data":"43c5d3cbea07b2f71a6427f7f8f0c5486326e6e637aefebb1edd4c2b3c333c07"} Dec 11 10:37:00 crc kubenswrapper[4953]: I1211 10:37:00.045510 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7104b2f1-eb69-4aa5-877e-44393692917b" containerName="nova-api-log" containerID="cri-o://fc6e86a289e775883e04a66a567e855f92e7e73afb00014467ab72ef173b250a" gracePeriod=30 Dec 11 10:37:00 crc kubenswrapper[4953]: I1211 10:37:00.045680 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7104b2f1-eb69-4aa5-877e-44393692917b" containerName="nova-api-api" containerID="cri-o://110f33b0d9640908537ab39742fbcf0d7eab4125182ea890d59626b7442f2b0c" gracePeriod=30 Dec 11 10:37:00 crc kubenswrapper[4953]: I1211 10:37:00.203813 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:37:00 crc kubenswrapper[4953]: I1211 10:37:00.204086 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="ceilometer-central-agent" containerID="cri-o://5629ee0af725a70de6d673a73f0b63cccae2bfd4affdec4bec8266f31c24bc77" gracePeriod=30 Dec 11 10:37:00 crc kubenswrapper[4953]: I1211 10:37:00.204266 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="ceilometer-notification-agent" containerID="cri-o://ab340757c8543dbdd6c8d1eb6549599bd86ca8a4add2ed5959efaf65989d40c2" gracePeriod=30 Dec 11 10:37:00 crc kubenswrapper[4953]: I1211 10:37:00.204287 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="sg-core" containerID="cri-o://11023c9ee9b023c7adc7cbbed1ae49ed8c4599657a8533550b9e0d297fba32d3" gracePeriod=30 Dec 11 10:37:00 crc kubenswrapper[4953]: I1211 10:37:00.204436 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="proxy-httpd" containerID="cri-o://e3d77dbc1259bf3a1ef2c7f949bc81dfba52ca6a16159f8fbf73a60eafd81b03" gracePeriod=30 Dec 11 10:37:00 crc kubenswrapper[4953]: I1211 10:37:00.229160 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" podStartSLOduration=3.229136692 podStartE2EDuration="3.229136692s" podCreationTimestamp="2025-12-11 10:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-11 10:37:00.215000823 +0000 UTC m=+1538.238859856" watchObservedRunningTime="2025-12-11 10:37:00.229136692 +0000 UTC m=+1538.252995735" Dec 11 10:37:01 crc kubenswrapper[4953]: I1211 10:37:01.056335 4953 generic.go:334] "Generic (PLEG): container finished" podID="7104b2f1-eb69-4aa5-877e-44393692917b" containerID="fc6e86a289e775883e04a66a567e855f92e7e73afb00014467ab72ef173b250a" exitCode=143 Dec 11 10:37:01 crc kubenswrapper[4953]: I1211 10:37:01.056429 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7104b2f1-eb69-4aa5-877e-44393692917b","Type":"ContainerDied","Data":"fc6e86a289e775883e04a66a567e855f92e7e73afb00014467ab72ef173b250a"} Dec 11 10:37:01 crc kubenswrapper[4953]: I1211 10:37:01.060843 4953 generic.go:334] "Generic (PLEG): container finished" podID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerID="e3d77dbc1259bf3a1ef2c7f949bc81dfba52ca6a16159f8fbf73a60eafd81b03" exitCode=0 Dec 11 10:37:01 crc kubenswrapper[4953]: I1211 10:37:01.060875 4953 generic.go:334] "Generic (PLEG): container finished" podID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerID="11023c9ee9b023c7adc7cbbed1ae49ed8c4599657a8533550b9e0d297fba32d3" exitCode=2 Dec 11 10:37:01 crc kubenswrapper[4953]: I1211 10:37:01.060885 4953 generic.go:334] "Generic (PLEG): container finished" podID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerID="5629ee0af725a70de6d673a73f0b63cccae2bfd4affdec4bec8266f31c24bc77" exitCode=0 Dec 11 10:37:01 crc kubenswrapper[4953]: I1211 10:37:01.060942 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06de80b3-ffbb-4efd-beca-dbf2b67046fa","Type":"ContainerDied","Data":"e3d77dbc1259bf3a1ef2c7f949bc81dfba52ca6a16159f8fbf73a60eafd81b03"} Dec 11 10:37:01 crc kubenswrapper[4953]: I1211 10:37:01.060972 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06de80b3-ffbb-4efd-beca-dbf2b67046fa","Type":"ContainerDied","Data":"11023c9ee9b023c7adc7cbbed1ae49ed8c4599657a8533550b9e0d297fba32d3"} Dec 11 10:37:01 crc kubenswrapper[4953]: I1211 10:37:01.060990 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06de80b3-ffbb-4efd-beca-dbf2b67046fa","Type":"ContainerDied","Data":"5629ee0af725a70de6d673a73f0b63cccae2bfd4affdec4bec8266f31c24bc77"} Dec 11 10:37:01 crc kubenswrapper[4953]: I1211 10:37:01.061126 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:37:01 crc kubenswrapper[4953]: I1211 10:37:01.674494 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.096965 4953 generic.go:334] "Generic (PLEG): container finished" podID="7104b2f1-eb69-4aa5-877e-44393692917b" containerID="110f33b0d9640908537ab39742fbcf0d7eab4125182ea890d59626b7442f2b0c" exitCode=0 Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.097763 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7104b2f1-eb69-4aa5-877e-44393692917b","Type":"ContainerDied","Data":"110f33b0d9640908537ab39742fbcf0d7eab4125182ea890d59626b7442f2b0c"} Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.100694 4953 generic.go:334] "Generic (PLEG): container finished" podID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerID="ab340757c8543dbdd6c8d1eb6549599bd86ca8a4add2ed5959efaf65989d40c2" exitCode=0 Dec 11 10:37:04 
Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.256155 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.384205 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.395174 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7104b2f1-eb69-4aa5-877e-44393692917b-config-data\") pod \"7104b2f1-eb69-4aa5-877e-44393692917b\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") "
Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.395275 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b8j5\" (UniqueName: \"kubernetes.io/projected/7104b2f1-eb69-4aa5-877e-44393692917b-kube-api-access-6b8j5\") pod \"7104b2f1-eb69-4aa5-877e-44393692917b\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") "
Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.395534 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7104b2f1-eb69-4aa5-877e-44393692917b-combined-ca-bundle\") pod \"7104b2f1-eb69-4aa5-877e-44393692917b\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") "
Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.396698 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7104b2f1-eb69-4aa5-877e-44393692917b-logs\") pod \"7104b2f1-eb69-4aa5-877e-44393692917b\" (UID: \"7104b2f1-eb69-4aa5-877e-44393692917b\") "
Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.397495 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7104b2f1-eb69-4aa5-877e-44393692917b-logs" (OuterVolumeSpecName: "logs") pod "7104b2f1-eb69-4aa5-877e-44393692917b" (UID: "7104b2f1-eb69-4aa5-877e-44393692917b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.397888 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7104b2f1-eb69-4aa5-877e-44393692917b-logs\") on node \"crc\" DevicePath \"\""
Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.421253 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7104b2f1-eb69-4aa5-877e-44393692917b-kube-api-access-6b8j5" (OuterVolumeSpecName: "kube-api-access-6b8j5") pod "7104b2f1-eb69-4aa5-877e-44393692917b" (UID: "7104b2f1-eb69-4aa5-877e-44393692917b"). InnerVolumeSpecName "kube-api-access-6b8j5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.449608 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7104b2f1-eb69-4aa5-877e-44393692917b-config-data" (OuterVolumeSpecName: "config-data") pod "7104b2f1-eb69-4aa5-877e-44393692917b" (UID: "7104b2f1-eb69-4aa5-877e-44393692917b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.499931 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-combined-ca-bundle\") pod \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.499992 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06de80b3-ffbb-4efd-beca-dbf2b67046fa-log-httpd\") pod \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.500070 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-config-data\") pod \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.500101 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-scripts\") pod \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.500201 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-ceilometer-tls-certs\") pod \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.500313 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06de80b3-ffbb-4efd-beca-dbf2b67046fa-run-httpd\") pod \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.500454 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvdzr\" (UniqueName: \"kubernetes.io/projected/06de80b3-ffbb-4efd-beca-dbf2b67046fa-kube-api-access-gvdzr\") pod \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.500501 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-sg-core-conf-yaml\") pod \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\" (UID: \"06de80b3-ffbb-4efd-beca-dbf2b67046fa\") " Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.501398 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7104b2f1-eb69-4aa5-877e-44393692917b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7104b2f1-eb69-4aa5-877e-44393692917b" (UID: "7104b2f1-eb69-4aa5-877e-44393692917b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.501728 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06de80b3-ffbb-4efd-beca-dbf2b67046fa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "06de80b3-ffbb-4efd-beca-dbf2b67046fa" (UID: "06de80b3-ffbb-4efd-beca-dbf2b67046fa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.502076 4953 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06de80b3-ffbb-4efd-beca-dbf2b67046fa-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.502097 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7104b2f1-eb69-4aa5-877e-44393692917b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.502109 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7104b2f1-eb69-4aa5-877e-44393692917b-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.502122 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b8j5\" (UniqueName: \"kubernetes.io/projected/7104b2f1-eb69-4aa5-877e-44393692917b-kube-api-access-6b8j5\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.505758 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06de80b3-ffbb-4efd-beca-dbf2b67046fa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "06de80b3-ffbb-4efd-beca-dbf2b67046fa" (UID: "06de80b3-ffbb-4efd-beca-dbf2b67046fa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.506640 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06de80b3-ffbb-4efd-beca-dbf2b67046fa-kube-api-access-gvdzr" (OuterVolumeSpecName: "kube-api-access-gvdzr") pod "06de80b3-ffbb-4efd-beca-dbf2b67046fa" (UID: "06de80b3-ffbb-4efd-beca-dbf2b67046fa"). InnerVolumeSpecName "kube-api-access-gvdzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.513129 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-scripts" (OuterVolumeSpecName: "scripts") pod "06de80b3-ffbb-4efd-beca-dbf2b67046fa" (UID: "06de80b3-ffbb-4efd-beca-dbf2b67046fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.534512 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "06de80b3-ffbb-4efd-beca-dbf2b67046fa" (UID: "06de80b3-ffbb-4efd-beca-dbf2b67046fa"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.562216 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "06de80b3-ffbb-4efd-beca-dbf2b67046fa" (UID: "06de80b3-ffbb-4efd-beca-dbf2b67046fa"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.603143 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06de80b3-ffbb-4efd-beca-dbf2b67046fa" (UID: "06de80b3-ffbb-4efd-beca-dbf2b67046fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.603742 4953 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.603774 4953 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06de80b3-ffbb-4efd-beca-dbf2b67046fa-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.603788 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvdzr\" (UniqueName: \"kubernetes.io/projected/06de80b3-ffbb-4efd-beca-dbf2b67046fa-kube-api-access-gvdzr\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.603801 4953 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.603812 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.603824 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.626036 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-config-data" (OuterVolumeSpecName: "config-data") pod "06de80b3-ffbb-4efd-beca-dbf2b67046fa" (UID: "06de80b3-ffbb-4efd-beca-dbf2b67046fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:04 crc kubenswrapper[4953]: I1211 10:37:04.705652 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06de80b3-ffbb-4efd-beca-dbf2b67046fa-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.112566 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7104b2f1-eb69-4aa5-877e-44393692917b","Type":"ContainerDied","Data":"c3122bb293e1067898c9a9df5ec96f7b4773dd9b492d9ee90794ea9a1847999c"} Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.112605 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.113003 4953 scope.go:117] "RemoveContainer" containerID="110f33b0d9640908537ab39742fbcf0d7eab4125182ea890d59626b7442f2b0c" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.115426 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06de80b3-ffbb-4efd-beca-dbf2b67046fa","Type":"ContainerDied","Data":"b338969bdf245a9b78f32e3573e9ed668362f3df5bd891213e53caccf7ab725f"} Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.115542 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.138810 4953 scope.go:117] "RemoveContainer" containerID="fc6e86a289e775883e04a66a567e855f92e7e73afb00014467ab72ef173b250a" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.153384 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.164867 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.170012 4953 scope.go:117] "RemoveContainer" containerID="e3d77dbc1259bf3a1ef2c7f949bc81dfba52ca6a16159f8fbf73a60eafd81b03" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.179826 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.199660 4953 scope.go:117] "RemoveContainer" containerID="11023c9ee9b023c7adc7cbbed1ae49ed8c4599657a8533550b9e0d297fba32d3" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.200844 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.217315 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 10:37:05 crc kubenswrapper[4953]: E1211 10:37:05.218049 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="ceilometer-central-agent" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.218129 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="ceilometer-central-agent" Dec 11 10:37:05 crc kubenswrapper[4953]: E1211 10:37:05.218194 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7104b2f1-eb69-4aa5-877e-44393692917b" containerName="nova-api-log" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.218249 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7104b2f1-eb69-4aa5-877e-44393692917b" containerName="nova-api-log" Dec 11 10:37:05 crc kubenswrapper[4953]: E1211 
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.218409 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="ceilometer-notification-agent"
Dec 11 10:37:05 crc kubenswrapper[4953]: E1211 10:37:05.218467 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7104b2f1-eb69-4aa5-877e-44393692917b" containerName="nova-api-api"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.218521 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7104b2f1-eb69-4aa5-877e-44393692917b" containerName="nova-api-api"
Dec 11 10:37:05 crc kubenswrapper[4953]: E1211 10:37:05.218600 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="proxy-httpd"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.218665 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="proxy-httpd"
Dec 11 10:37:05 crc kubenswrapper[4953]: E1211 10:37:05.218740 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="sg-core"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.218799 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="sg-core"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.219048 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7104b2f1-eb69-4aa5-877e-44393692917b" containerName="nova-api-api"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.219128 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="ceilometer-notification-agent"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.219195 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="ceilometer-central-agent"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.219259 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7104b2f1-eb69-4aa5-877e-44393692917b" containerName="nova-api-log"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.219327 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="proxy-httpd"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.219403 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" containerName="sg-core"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.220519 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.226095 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.226367 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.230378 4953 scope.go:117] "RemoveContainer" containerID="ab340757c8543dbdd6c8d1eb6549599bd86ca8a4add2ed5959efaf65989d40c2" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.232465 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.235925 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.245269 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.251520 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.258404 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.258827 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.262296 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.262681 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.288857 4953 scope.go:117] "RemoveContainer" containerID="5629ee0af725a70de6d673a73f0b63cccae2bfd4affdec4bec8266f31c24bc77" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.331555 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxllb\" (UniqueName: \"kubernetes.io/projected/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-kube-api-access-dxllb\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.331698 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-logs\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.331777 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-public-tls-certs\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.332371 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.333489 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435048 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-scripts\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435117 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-logs\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435163 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435202 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-public-tls-certs\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435246 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435286 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6mt8\" (UniqueName: \"kubernetes.io/projected/0fdfbbe2-a3b8-4834-9920-114c40de67dc-kube-api-access-g6mt8\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435307 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0"
Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435329 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-config-data\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0"
\"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435371 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-config-data\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435388 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435409 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435438 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fdfbbe2-a3b8-4834-9920-114c40de67dc-run-httpd\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435461 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxllb\" (UniqueName: \"kubernetes.io/projected/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-kube-api-access-dxllb\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435481 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fdfbbe2-a3b8-4834-9920-114c40de67dc-log-httpd\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.435895 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-logs\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.445253 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-public-tls-certs\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.445374 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-config-data\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.445436 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.445436 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.454815 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxllb\" (UniqueName: \"kubernetes.io/projected/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-kube-api-access-dxllb\") pod \"nova-api-0\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.536583 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6mt8\" (UniqueName: \"kubernetes.io/projected/0fdfbbe2-a3b8-4834-9920-114c40de67dc-kube-api-access-g6mt8\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.536647 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-config-data\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.536689 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.536755 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fdfbbe2-a3b8-4834-9920-114c40de67dc-run-httpd\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.536801 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fdfbbe2-a3b8-4834-9920-114c40de67dc-log-httpd\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.536836 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-scripts\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.536897 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.536974 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.537516 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fdfbbe2-a3b8-4834-9920-114c40de67dc-log-httpd\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.537818 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fdfbbe2-a3b8-4834-9920-114c40de67dc-run-httpd\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.540663 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.541447 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.541547 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-config-data\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.543128 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.544287 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-scripts\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.547371 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.560594 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6mt8\" (UniqueName: \"kubernetes.io/projected/0fdfbbe2-a3b8-4834-9920-114c40de67dc-kube-api-access-g6mt8\") pod \"ceilometer-0\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " pod="openstack/ceilometer-0" Dec 11 10:37:05 crc kubenswrapper[4953]: I1211 10:37:05.578676 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:37:06 crc kubenswrapper[4953]: I1211 10:37:06.089148 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:37:06 crc kubenswrapper[4953]: I1211 10:37:06.130933 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abf03403-5280-4fe7-ac72-8d17a8a7fcb7","Type":"ContainerStarted","Data":"0989bc11b6c2f44953e98325cce2fcb4d06c1ca1aa1d5670ff8123be37328001"} Dec 11 10:37:06 crc kubenswrapper[4953]: I1211 10:37:06.182890 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:37:06 crc kubenswrapper[4953]: W1211 10:37:06.184459 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fdfbbe2_a3b8_4834_9920_114c40de67dc.slice/crio-f6b7bc3fac68eba454c31ca6b4b8dc0e7012a56bee2904baf3ff4895258d09bf WatchSource:0}: Error finding container f6b7bc3fac68eba454c31ca6b4b8dc0e7012a56bee2904baf3ff4895258d09bf: Status 404 returned error can't find the container with id f6b7bc3fac68eba454c31ca6b4b8dc0e7012a56bee2904baf3ff4895258d09bf Dec 11 10:37:06 crc kubenswrapper[4953]: I1211 10:37:06.485005 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06de80b3-ffbb-4efd-beca-dbf2b67046fa" path="/var/lib/kubelet/pods/06de80b3-ffbb-4efd-beca-dbf2b67046fa/volumes" Dec 11 10:37:06 crc kubenswrapper[4953]: I1211 10:37:06.486297 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7104b2f1-eb69-4aa5-877e-44393692917b" path="/var/lib/kubelet/pods/7104b2f1-eb69-4aa5-877e-44393692917b/volumes" Dec 11 10:37:06 crc kubenswrapper[4953]: I1211 10:37:06.672540 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:37:06 crc kubenswrapper[4953]: I1211 10:37:06.697723 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.140459 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abf03403-5280-4fe7-ac72-8d17a8a7fcb7","Type":"ContainerStarted","Data":"4b50ba4bbcce41a88a593c3c973f004487f31dd61dfe5a71e64a83db8f9a9c2f"} Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.140514 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abf03403-5280-4fe7-ac72-8d17a8a7fcb7","Type":"ContainerStarted","Data":"f7ee9bc67732ebd8c3394ac44e0ed0e4085ee206951e3925b501367392391bbe"} Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.142141 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fdfbbe2-a3b8-4834-9920-114c40de67dc","Type":"ContainerStarted","Data":"26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef"} Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.142178 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fdfbbe2-a3b8-4834-9920-114c40de67dc","Type":"ContainerStarted","Data":"f6b7bc3fac68eba454c31ca6b4b8dc0e7012a56bee2904baf3ff4895258d09bf"} Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.159364 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.165891 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.165871155 podStartE2EDuration="2.165871155s" podCreationTimestamp="2025-12-11 10:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:37:07.158424254 +0000 UTC m=+1545.182283287" watchObservedRunningTime="2025-12-11 10:37:07.165871155 +0000 UTC m=+1545.189730188" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.389133 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-npwrw"] Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.390308 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.393319 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.393699 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.408495 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-npwrw"] Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.498002 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-826zk\" (UniqueName: \"kubernetes.io/projected/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-kube-api-access-826zk\") pod \"nova-cell1-cell-mapping-npwrw\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.498360 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-npwrw\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.498386 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-scripts\") pod \"nova-cell1-cell-mapping-npwrw\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.498412 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-config-data\") pod \"nova-cell1-cell-mapping-npwrw\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.600449 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-config-data\") pod \"nova-cell1-cell-mapping-npwrw\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.600610 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-826zk\" (UniqueName: 
\"kubernetes.io/projected/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-kube-api-access-826zk\") pod \"nova-cell1-cell-mapping-npwrw\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.600670 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-npwrw\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.600691 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-scripts\") pod \"nova-cell1-cell-mapping-npwrw\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.610672 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-npwrw\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.611544 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-scripts\") pod \"nova-cell1-cell-mapping-npwrw\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.616331 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-config-data\") pod \"nova-cell1-cell-mapping-npwrw\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.639477 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-826zk\" (UniqueName: \"kubernetes.io/projected/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-kube-api-access-826zk\") pod \"nova-cell1-cell-mapping-npwrw\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.648778 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.713243 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.731934 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-rcxzb"] Dec 11 10:37:07 crc kubenswrapper[4953]: I1211 10:37:07.732158 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" podUID="180fcc74-0e86-469b-b328-ccdf05971726" containerName="dnsmasq-dns" containerID="cri-o://e90fc695beb9c3d6e53cf585d31e1f95c25f2f4f10bf03bb714d94fd13d3ffd3" gracePeriod=10 Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.160201 4953 generic.go:334] "Generic (PLEG): container finished" podID="180fcc74-0e86-469b-b328-ccdf05971726" containerID="e90fc695beb9c3d6e53cf585d31e1f95c25f2f4f10bf03bb714d94fd13d3ffd3" exitCode=0 Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.160305 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" event={"ID":"180fcc74-0e86-469b-b328-ccdf05971726","Type":"ContainerDied","Data":"e90fc695beb9c3d6e53cf585d31e1f95c25f2f4f10bf03bb714d94fd13d3ffd3"} Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.311887 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-npwrw"] Dec 11 10:37:08 crc kubenswrapper[4953]: W1211 10:37:08.330931 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd02eaa4f_c90f_4399_a3f7_661e4773b7ee.slice/crio-fe5ba61e576f3bd5a446357b7717ccfd94b461c7aca0c0593260289fdc72bcf3 WatchSource:0}: Error finding container fe5ba61e576f3bd5a446357b7717ccfd94b461c7aca0c0593260289fdc72bcf3: Status 404 returned error can't find the container with id fe5ba61e576f3bd5a446357b7717ccfd94b461c7aca0c0593260289fdc72bcf3 Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.406200 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.415445 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nk7b\" (UniqueName: \"kubernetes.io/projected/180fcc74-0e86-469b-b328-ccdf05971726-kube-api-access-9nk7b\") pod \"180fcc74-0e86-469b-b328-ccdf05971726\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.415513 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-dns-svc\") pod \"180fcc74-0e86-469b-b328-ccdf05971726\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.415564 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-ovsdbserver-sb\") pod \"180fcc74-0e86-469b-b328-ccdf05971726\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.415645 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-dns-swift-storage-0\") pod \"180fcc74-0e86-469b-b328-ccdf05971726\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.415677 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-config\") pod \"180fcc74-0e86-469b-b328-ccdf05971726\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.415732 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-ovsdbserver-nb\") pod \"180fcc74-0e86-469b-b328-ccdf05971726\" (UID: \"180fcc74-0e86-469b-b328-ccdf05971726\") " Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.420100 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/180fcc74-0e86-469b-b328-ccdf05971726-kube-api-access-9nk7b" (OuterVolumeSpecName: "kube-api-access-9nk7b") pod "180fcc74-0e86-469b-b328-ccdf05971726" (UID: "180fcc74-0e86-469b-b328-ccdf05971726"). InnerVolumeSpecName "kube-api-access-9nk7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.489674 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "180fcc74-0e86-469b-b328-ccdf05971726" (UID: "180fcc74-0e86-469b-b328-ccdf05971726"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.495064 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "180fcc74-0e86-469b-b328-ccdf05971726" (UID: "180fcc74-0e86-469b-b328-ccdf05971726"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.503705 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "180fcc74-0e86-469b-b328-ccdf05971726" (UID: "180fcc74-0e86-469b-b328-ccdf05971726"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.517278 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nk7b\" (UniqueName: \"kubernetes.io/projected/180fcc74-0e86-469b-b328-ccdf05971726-kube-api-access-9nk7b\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.517305 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.517315 4953 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.517325 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.517920 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-config" (OuterVolumeSpecName: "config") pod "180fcc74-0e86-469b-b328-ccdf05971726" (UID: "180fcc74-0e86-469b-b328-ccdf05971726"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.518269 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "180fcc74-0e86-469b-b328-ccdf05971726" (UID: "180fcc74-0e86-469b-b328-ccdf05971726"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.620601 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:08 crc kubenswrapper[4953]: I1211 10:37:08.620947 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/180fcc74-0e86-469b-b328-ccdf05971726-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:09 crc kubenswrapper[4953]: I1211 10:37:09.170805 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fdfbbe2-a3b8-4834-9920-114c40de67dc","Type":"ContainerStarted","Data":"24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3"} Dec 11 10:37:09 crc kubenswrapper[4953]: I1211 10:37:09.171149 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fdfbbe2-a3b8-4834-9920-114c40de67dc","Type":"ContainerStarted","Data":"833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409"} Dec 11 10:37:09 crc kubenswrapper[4953]: I1211 10:37:09.173463 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" event={"ID":"180fcc74-0e86-469b-b328-ccdf05971726","Type":"ContainerDied","Data":"7a898f598e255693454cd15728f317d46b3f29bd9c040ee9b106d6d413dc38a9"} Dec 11 10:37:09 crc kubenswrapper[4953]: I1211 10:37:09.173497 4953 scope.go:117] "RemoveContainer" containerID="e90fc695beb9c3d6e53cf585d31e1f95c25f2f4f10bf03bb714d94fd13d3ffd3" Dec 11 10:37:09 crc kubenswrapper[4953]: I1211 10:37:09.173517 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-rcxzb" Dec 11 10:37:09 crc kubenswrapper[4953]: I1211 10:37:09.175187 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-npwrw" event={"ID":"d02eaa4f-c90f-4399-a3f7-661e4773b7ee","Type":"ContainerStarted","Data":"1bc9138327f744de0ebda2b78524028f949e14cc2e96f94f06d591c861c22632"} Dec 11 10:37:09 crc kubenswrapper[4953]: I1211 10:37:09.175244 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-npwrw" event={"ID":"d02eaa4f-c90f-4399-a3f7-661e4773b7ee","Type":"ContainerStarted","Data":"fe5ba61e576f3bd5a446357b7717ccfd94b461c7aca0c0593260289fdc72bcf3"} Dec 11 10:37:09 crc kubenswrapper[4953]: I1211 10:37:09.193955 4953 scope.go:117] "RemoveContainer" containerID="1a361c23e77ccc1c64af016c246fb4ac4d67817bbaa2ab7ded619ddc594f7a26" Dec 11 10:37:09 crc kubenswrapper[4953]: I1211 10:37:09.195076 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-npwrw" podStartSLOduration=2.195059218 podStartE2EDuration="2.195059218s" podCreationTimestamp="2025-12-11 10:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:37:09.1906287 +0000 UTC m=+1547.214487733" watchObservedRunningTime="2025-12-11 10:37:09.195059218 +0000 UTC m=+1547.218918251" Dec 11 10:37:09 crc kubenswrapper[4953]: I1211 10:37:09.221394 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-rcxzb"] Dec 11 10:37:09 crc kubenswrapper[4953]: I1211 10:37:09.230059 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-rcxzb"] Dec 
11 10:37:10 crc kubenswrapper[4953]: I1211 10:37:10.486223 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="180fcc74-0e86-469b-b328-ccdf05971726" path="/var/lib/kubelet/pods/180fcc74-0e86-469b-b328-ccdf05971726/volumes" Dec 11 10:37:11 crc kubenswrapper[4953]: I1211 10:37:11.222850 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fdfbbe2-a3b8-4834-9920-114c40de67dc","Type":"ContainerStarted","Data":"7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f"} Dec 11 10:37:11 crc kubenswrapper[4953]: I1211 10:37:11.224395 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 10:37:11 crc kubenswrapper[4953]: I1211 10:37:11.255323 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.217147127 podStartE2EDuration="6.255302703s" podCreationTimestamp="2025-12-11 10:37:05 +0000 UTC" firstStartedPulling="2025-12-11 10:37:06.186835318 +0000 UTC m=+1544.210694351" lastFinishedPulling="2025-12-11 10:37:10.224990894 +0000 UTC m=+1548.248849927" observedRunningTime="2025-12-11 10:37:11.250784382 +0000 UTC m=+1549.274643435" watchObservedRunningTime="2025-12-11 10:37:11.255302703 +0000 UTC m=+1549.279161726" Dec 11 10:37:14 crc kubenswrapper[4953]: I1211 10:37:14.252910 4953 generic.go:334] "Generic (PLEG): container finished" podID="d02eaa4f-c90f-4399-a3f7-661e4773b7ee" containerID="1bc9138327f744de0ebda2b78524028f949e14cc2e96f94f06d591c861c22632" exitCode=0 Dec 11 10:37:14 crc kubenswrapper[4953]: I1211 10:37:14.252985 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-npwrw" event={"ID":"d02eaa4f-c90f-4399-a3f7-661e4773b7ee","Type":"ContainerDied","Data":"1bc9138327f744de0ebda2b78524028f949e14cc2e96f94f06d591c861c22632"} Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.553485 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.553906 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.691564 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.769416 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-combined-ca-bundle\") pod \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.769536 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-scripts\") pod \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.769692 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-config-data\") pod \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.769797 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-826zk\" (UniqueName: \"kubernetes.io/projected/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-kube-api-access-826zk\") pod \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\" (UID: \"d02eaa4f-c90f-4399-a3f7-661e4773b7ee\") " Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.778858 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-kube-api-access-826zk" (OuterVolumeSpecName: "kube-api-access-826zk") pod "d02eaa4f-c90f-4399-a3f7-661e4773b7ee" (UID: "d02eaa4f-c90f-4399-a3f7-661e4773b7ee"). InnerVolumeSpecName "kube-api-access-826zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.788793 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-scripts" (OuterVolumeSpecName: "scripts") pod "d02eaa4f-c90f-4399-a3f7-661e4773b7ee" (UID: "d02eaa4f-c90f-4399-a3f7-661e4773b7ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.808507 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d02eaa4f-c90f-4399-a3f7-661e4773b7ee" (UID: "d02eaa4f-c90f-4399-a3f7-661e4773b7ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.808840 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-config-data" (OuterVolumeSpecName: "config-data") pod "d02eaa4f-c90f-4399-a3f7-661e4773b7ee" (UID: "d02eaa4f-c90f-4399-a3f7-661e4773b7ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.873793 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.874213 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-826zk\" (UniqueName: \"kubernetes.io/projected/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-kube-api-access-826zk\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.874235 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:15 crc kubenswrapper[4953]: I1211 10:37:15.874252 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02eaa4f-c90f-4399-a3f7-661e4773b7ee-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.274608 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-npwrw" event={"ID":"d02eaa4f-c90f-4399-a3f7-661e4773b7ee","Type":"ContainerDied","Data":"fe5ba61e576f3bd5a446357b7717ccfd94b461c7aca0c0593260289fdc72bcf3"} Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.274653 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe5ba61e576f3bd5a446357b7717ccfd94b461c7aca0c0593260289fdc72bcf3" Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.274714 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-npwrw" Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.484975 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.485198 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" containerName="nova-api-log" containerID="cri-o://f7ee9bc67732ebd8c3394ac44e0ed0e4085ee206951e3925b501367392391bbe" gracePeriod=30 Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.485291 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" containerName="nova-api-api" containerID="cri-o://4b50ba4bbcce41a88a593c3c973f004487f31dd61dfe5a71e64a83db8f9a9c2f" gracePeriod=30 Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.494608 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": EOF" Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.494744 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": EOF" Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.495260 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.495529 4953 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-scheduler-0" podUID="88583b85-6fea-4cce-afee-c2dd1d16c119" containerName="nova-scheduler-scheduler" containerID="cri-o://368cfa6521194da5dfbcfe50116277e0d94e4595de7ae4e9091034224c726808" gracePeriod=30 Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.522707 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.522976 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" containerName="nova-metadata-log" containerID="cri-o://f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910" gracePeriod=30 Dec 11 10:37:16 crc kubenswrapper[4953]: I1211 10:37:16.523145 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" containerName="nova-metadata-metadata" containerID="cri-o://1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2" gracePeriod=30 Dec 11 10:37:17 crc kubenswrapper[4953]: I1211 10:37:17.290212 4953 generic.go:334] "Generic (PLEG): container finished" podID="82055682-e0c4-4cf0-b034-49ba335cb911" containerID="f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910" exitCode=143 Dec 11 10:37:17 crc kubenswrapper[4953]: I1211 10:37:17.290292 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82055682-e0c4-4cf0-b034-49ba335cb911","Type":"ContainerDied","Data":"f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910"} Dec 11 10:37:17 crc kubenswrapper[4953]: I1211 10:37:17.292285 4953 generic.go:334] "Generic (PLEG): container finished" podID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" containerID="f7ee9bc67732ebd8c3394ac44e0ed0e4085ee206951e3925b501367392391bbe" exitCode=143 Dec 11 10:37:17 crc kubenswrapper[4953]: I1211 10:37:17.292335 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abf03403-5280-4fe7-ac72-8d17a8a7fcb7","Type":"ContainerDied","Data":"f7ee9bc67732ebd8c3394ac44e0ed0e4085ee206951e3925b501367392391bbe"} Dec 11 10:37:18 crc kubenswrapper[4953]: I1211 10:37:18.193986 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:37:18 crc kubenswrapper[4953]: I1211 10:37:18.194336 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.312852 4953 generic.go:334] "Generic (PLEG): container finished" podID="88583b85-6fea-4cce-afee-c2dd1d16c119" containerID="368cfa6521194da5dfbcfe50116277e0d94e4595de7ae4e9091034224c726808" exitCode=0 Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.312969 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88583b85-6fea-4cce-afee-c2dd1d16c119","Type":"ContainerDied","Data":"368cfa6521194da5dfbcfe50116277e0d94e4595de7ae4e9091034224c726808"} Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 
10:37:19.670833 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:40300->10.217.0.194:8775: read: connection reset by peer" Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.670873 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:40296->10.217.0.194:8775: read: connection reset by peer" Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.675891 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.804241 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjqjr\" (UniqueName: \"kubernetes.io/projected/88583b85-6fea-4cce-afee-c2dd1d16c119-kube-api-access-cjqjr\") pod \"88583b85-6fea-4cce-afee-c2dd1d16c119\" (UID: \"88583b85-6fea-4cce-afee-c2dd1d16c119\") " Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.804691 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88583b85-6fea-4cce-afee-c2dd1d16c119-config-data\") pod \"88583b85-6fea-4cce-afee-c2dd1d16c119\" (UID: \"88583b85-6fea-4cce-afee-c2dd1d16c119\") " Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.804729 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88583b85-6fea-4cce-afee-c2dd1d16c119-combined-ca-bundle\") pod \"88583b85-6fea-4cce-afee-c2dd1d16c119\" (UID: \"88583b85-6fea-4cce-afee-c2dd1d16c119\") " Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.810994 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88583b85-6fea-4cce-afee-c2dd1d16c119-kube-api-access-cjqjr" (OuterVolumeSpecName: "kube-api-access-cjqjr") pod "88583b85-6fea-4cce-afee-c2dd1d16c119" (UID: "88583b85-6fea-4cce-afee-c2dd1d16c119"). InnerVolumeSpecName "kube-api-access-cjqjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.839506 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88583b85-6fea-4cce-afee-c2dd1d16c119-config-data" (OuterVolumeSpecName: "config-data") pod "88583b85-6fea-4cce-afee-c2dd1d16c119" (UID: "88583b85-6fea-4cce-afee-c2dd1d16c119"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.868325 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88583b85-6fea-4cce-afee-c2dd1d16c119-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88583b85-6fea-4cce-afee-c2dd1d16c119" (UID: "88583b85-6fea-4cce-afee-c2dd1d16c119"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.909631 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjqjr\" (UniqueName: \"kubernetes.io/projected/88583b85-6fea-4cce-afee-c2dd1d16c119-kube-api-access-cjqjr\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.909669 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88583b85-6fea-4cce-afee-c2dd1d16c119-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:19 crc kubenswrapper[4953]: I1211 10:37:19.909682 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88583b85-6fea-4cce-afee-c2dd1d16c119-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.065797 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.112418 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x545\" (UniqueName: \"kubernetes.io/projected/82055682-e0c4-4cf0-b034-49ba335cb911-kube-api-access-4x545\") pod \"82055682-e0c4-4cf0-b034-49ba335cb911\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.112647 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82055682-e0c4-4cf0-b034-49ba335cb911-logs\") pod \"82055682-e0c4-4cf0-b034-49ba335cb911\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.112694 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-combined-ca-bundle\") pod \"82055682-e0c4-4cf0-b034-49ba335cb911\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.112743 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-nova-metadata-tls-certs\") pod \"82055682-e0c4-4cf0-b034-49ba335cb911\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.112798 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-config-data\") pod \"82055682-e0c4-4cf0-b034-49ba335cb911\" (UID: \"82055682-e0c4-4cf0-b034-49ba335cb911\") " Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.113744 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82055682-e0c4-4cf0-b034-49ba335cb911-logs" (OuterVolumeSpecName: "logs") pod "82055682-e0c4-4cf0-b034-49ba335cb911" (UID: "82055682-e0c4-4cf0-b034-49ba335cb911"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.130861 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82055682-e0c4-4cf0-b034-49ba335cb911-kube-api-access-4x545" (OuterVolumeSpecName: "kube-api-access-4x545") pod "82055682-e0c4-4cf0-b034-49ba335cb911" (UID: "82055682-e0c4-4cf0-b034-49ba335cb911"). InnerVolumeSpecName "kube-api-access-4x545". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.143829 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-config-data" (OuterVolumeSpecName: "config-data") pod "82055682-e0c4-4cf0-b034-49ba335cb911" (UID: "82055682-e0c4-4cf0-b034-49ba335cb911"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.152790 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82055682-e0c4-4cf0-b034-49ba335cb911" (UID: "82055682-e0c4-4cf0-b034-49ba335cb911"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.176971 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "82055682-e0c4-4cf0-b034-49ba335cb911" (UID: "82055682-e0c4-4cf0-b034-49ba335cb911"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.215489 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82055682-e0c4-4cf0-b034-49ba335cb911-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.215623 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.215644 4953 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.215656 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82055682-e0c4-4cf0-b034-49ba335cb911-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.215687 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x545\" (UniqueName: \"kubernetes.io/projected/82055682-e0c4-4cf0-b034-49ba335cb911-kube-api-access-4x545\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.337416 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88583b85-6fea-4cce-afee-c2dd1d16c119","Type":"ContainerDied","Data":"db63c092c7a16dd6ea4a2263317c1496c17b35997d31b0d5e99a3bd2dfec5759"} Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.337477 4953 scope.go:117] "RemoveContainer" containerID="368cfa6521194da5dfbcfe50116277e0d94e4595de7ae4e9091034224c726808" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.337637 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.339195 4953 generic.go:334] "Generic (PLEG): container finished" podID="82055682-e0c4-4cf0-b034-49ba335cb911" containerID="1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2" exitCode=0 Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.339237 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82055682-e0c4-4cf0-b034-49ba335cb911","Type":"ContainerDied","Data":"1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2"} Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.339263 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82055682-e0c4-4cf0-b034-49ba335cb911","Type":"ContainerDied","Data":"edd9edf2a2a254f73a3ed818f217d10dd7499c48ae3bd59b35279daed244209c"} Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.339259 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.359841 4953 scope.go:117] "RemoveContainer" containerID="1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.378742 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.394896 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.396892 4953 scope.go:117] "RemoveContainer" containerID="f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.414422 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.430806 4953 scope.go:117] "RemoveContainer" containerID="1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2" Dec 11 10:37:20 crc kubenswrapper[4953]: E1211 10:37:20.433872 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2\": container with ID starting with 1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2 not found: ID does not exist" containerID="1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.433931 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2"} err="failed to get container status \"1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2\": rpc error: code = NotFound desc = could not find container \"1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2\": container with ID starting with 1926b8cbe681d9a8c8bed729a576861309230c29f98f10d069f9c75d266143f2 not found: ID does not exist" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.433963 4953 scope.go:117] "RemoveContainer" containerID="f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.434060 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:37:20 crc kubenswrapper[4953]: E1211 10:37:20.435439 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910\": container with ID starting with f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910 not found: ID does not exist" containerID="f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.435470 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910"} err="failed to get container status \"f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910\": rpc error: code = NotFound desc = could not find container \"f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910\": container with ID starting with f5c9fb1119da88b9fa6b56a28ad73ffa87f8d578eb7285f003aadc343843a910 not found: ID does not exist" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 
10:37:20.447690 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:37:20 crc kubenswrapper[4953]: E1211 10:37:20.448269 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" containerName="nova-metadata-metadata" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.448294 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" containerName="nova-metadata-metadata" Dec 11 10:37:20 crc kubenswrapper[4953]: E1211 10:37:20.448318 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88583b85-6fea-4cce-afee-c2dd1d16c119" containerName="nova-scheduler-scheduler" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.448327 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="88583b85-6fea-4cce-afee-c2dd1d16c119" containerName="nova-scheduler-scheduler" Dec 11 10:37:20 crc kubenswrapper[4953]: E1211 10:37:20.448361 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180fcc74-0e86-469b-b328-ccdf05971726" containerName="dnsmasq-dns" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.448374 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="180fcc74-0e86-469b-b328-ccdf05971726" containerName="dnsmasq-dns" Dec 11 10:37:20 crc kubenswrapper[4953]: E1211 10:37:20.448396 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" containerName="nova-metadata-log" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.448405 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" containerName="nova-metadata-log" Dec 11 10:37:20 crc kubenswrapper[4953]: E1211 10:37:20.448417 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180fcc74-0e86-469b-b328-ccdf05971726" containerName="init" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.448424 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="180fcc74-0e86-469b-b328-ccdf05971726" containerName="init" Dec 11 10:37:20 crc kubenswrapper[4953]: E1211 10:37:20.448437 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02eaa4f-c90f-4399-a3f7-661e4773b7ee" containerName="nova-manage" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.448482 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02eaa4f-c90f-4399-a3f7-661e4773b7ee" containerName="nova-manage" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.448751 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" containerName="nova-metadata-metadata" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.448771 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" containerName="nova-metadata-log" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.448786 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02eaa4f-c90f-4399-a3f7-661e4773b7ee" containerName="nova-manage" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.448793 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="180fcc74-0e86-469b-b328-ccdf05971726" containerName="dnsmasq-dns" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.448807 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="88583b85-6fea-4cce-afee-c2dd1d16c119" containerName="nova-scheduler-scheduler" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 
10:37:20.449621 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.451706 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.466119 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.468493 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.471717 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.471741 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.490482 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82055682-e0c4-4cf0-b034-49ba335cb911" path="/var/lib/kubelet/pods/82055682-e0c4-4cf0-b034-49ba335cb911/volumes" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.491721 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88583b85-6fea-4cce-afee-c2dd1d16c119" path="/var/lib/kubelet/pods/88583b85-6fea-4cce-afee-c2dd1d16c119/volumes" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.492410 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.499020 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.524376 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-logs\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.524857 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af3727e-8096-420d-b8d0-95988a5d36db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7af3727e-8096-420d-b8d0-95988a5d36db\") " pod="openstack/nova-scheduler-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.525004 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl6kg\" (UniqueName: \"kubernetes.io/projected/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-kube-api-access-xl6kg\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.525088 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.525152 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7af3727e-8096-420d-b8d0-95988a5d36db-config-data\") pod \"nova-scheduler-0\" (UID: \"7af3727e-8096-420d-b8d0-95988a5d36db\") " pod="openstack/nova-scheduler-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.525362 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-config-data\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.525719 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrt7p\" (UniqueName: \"kubernetes.io/projected/7af3727e-8096-420d-b8d0-95988a5d36db-kube-api-access-mrt7p\") pod \"nova-scheduler-0\" (UID: \"7af3727e-8096-420d-b8d0-95988a5d36db\") " pod="openstack/nova-scheduler-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.525763 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.627766 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrt7p\" (UniqueName: \"kubernetes.io/projected/7af3727e-8096-420d-b8d0-95988a5d36db-kube-api-access-mrt7p\") pod \"nova-scheduler-0\" (UID: \"7af3727e-8096-420d-b8d0-95988a5d36db\") " pod="openstack/nova-scheduler-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.627855 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.627896 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-logs\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.627976 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af3727e-8096-420d-b8d0-95988a5d36db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7af3727e-8096-420d-b8d0-95988a5d36db\") " pod="openstack/nova-scheduler-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.628264 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl6kg\" (UniqueName: \"kubernetes.io/projected/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-kube-api-access-xl6kg\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.628298 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " 
pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.628328 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af3727e-8096-420d-b8d0-95988a5d36db-config-data\") pod \"nova-scheduler-0\" (UID: \"7af3727e-8096-420d-b8d0-95988a5d36db\") " pod="openstack/nova-scheduler-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.628390 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-config-data\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.628705 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-logs\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.633949 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.634413 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af3727e-8096-420d-b8d0-95988a5d36db-config-data\") pod \"nova-scheduler-0\" (UID: \"7af3727e-8096-420d-b8d0-95988a5d36db\") " pod="openstack/nova-scheduler-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.634558 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-config-data\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.635974 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af3727e-8096-420d-b8d0-95988a5d36db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7af3727e-8096-420d-b8d0-95988a5d36db\") " pod="openstack/nova-scheduler-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.636410 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.645396 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrt7p\" (UniqueName: \"kubernetes.io/projected/7af3727e-8096-420d-b8d0-95988a5d36db-kube-api-access-mrt7p\") pod \"nova-scheduler-0\" (UID: \"7af3727e-8096-420d-b8d0-95988a5d36db\") " pod="openstack/nova-scheduler-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.697541 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl6kg\" (UniqueName: \"kubernetes.io/projected/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-kube-api-access-xl6kg\") pod \"nova-metadata-0\" (UID: 
\"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " pod="openstack/nova-metadata-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.772197 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:37:20 crc kubenswrapper[4953]: I1211 10:37:20.799730 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:37:21 crc kubenswrapper[4953]: I1211 10:37:21.264329 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:37:21 crc kubenswrapper[4953]: I1211 10:37:21.356738 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:37:21 crc kubenswrapper[4953]: I1211 10:37:21.389750 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd4593de-19d2-47c1-b6b0-b9c0e46e1107","Type":"ContainerStarted","Data":"5febf106fc861bdba9ab3c21cea0374dd05c3d46b644b1e6266d7eb01aa1ff7e"} Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.400613 4953 generic.go:334] "Generic (PLEG): container finished" podID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" containerID="4b50ba4bbcce41a88a593c3c973f004487f31dd61dfe5a71e64a83db8f9a9c2f" exitCode=0 Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.400697 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abf03403-5280-4fe7-ac72-8d17a8a7fcb7","Type":"ContainerDied","Data":"4b50ba4bbcce41a88a593c3c973f004487f31dd61dfe5a71e64a83db8f9a9c2f"} Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.401270 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abf03403-5280-4fe7-ac72-8d17a8a7fcb7","Type":"ContainerDied","Data":"0989bc11b6c2f44953e98325cce2fcb4d06c1ca1aa1d5670ff8123be37328001"} Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.401289 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0989bc11b6c2f44953e98325cce2fcb4d06c1ca1aa1d5670ff8123be37328001" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.404418 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7af3727e-8096-420d-b8d0-95988a5d36db","Type":"ContainerStarted","Data":"76b1adf1ecb9cc73cce6fab14903ebf309e0061c7db3b0247296d4d28611c686"} Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.404450 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7af3727e-8096-420d-b8d0-95988a5d36db","Type":"ContainerStarted","Data":"86825f2ee0b416e454f9379aa6d0427a47b3a0136dbdcb16fc2c48e6946ce5b8"} Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.407070 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd4593de-19d2-47c1-b6b0-b9c0e46e1107","Type":"ContainerStarted","Data":"d01aaa77da386e9baab54f2e6b436105ab0703db857b3d9adc7c4e2df8f0e6e2"} Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.407109 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd4593de-19d2-47c1-b6b0-b9c0e46e1107","Type":"ContainerStarted","Data":"f81a9c3634afcd79a633362dcf52201c0a4c001fbfe1929486695ab342d99feb"} Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.438282 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.438263083 podStartE2EDuration="2.438263083s" 
podCreationTimestamp="2025-12-11 10:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:37:22.423408695 +0000 UTC m=+1560.447267728" watchObservedRunningTime="2025-12-11 10:37:22.438263083 +0000 UTC m=+1560.462122116" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.440872 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.452524 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.45250025 podStartE2EDuration="2.45250025s" podCreationTimestamp="2025-12-11 10:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:37:22.443363963 +0000 UTC m=+1560.467223016" watchObservedRunningTime="2025-12-11 10:37:22.45250025 +0000 UTC m=+1560.476359283" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.488295 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-internal-tls-certs\") pod \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.488512 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-combined-ca-bundle\") pod \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.488554 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-logs\") pod \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.488629 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-public-tls-certs\") pod \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.488673 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxllb\" (UniqueName: \"kubernetes.io/projected/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-kube-api-access-dxllb\") pod \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.488723 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-config-data\") pod \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\" (UID: \"abf03403-5280-4fe7-ac72-8d17a8a7fcb7\") " Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.492116 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-logs" (OuterVolumeSpecName: "logs") pod "abf03403-5280-4fe7-ac72-8d17a8a7fcb7" (UID: "abf03403-5280-4fe7-ac72-8d17a8a7fcb7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.498091 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-kube-api-access-dxllb" (OuterVolumeSpecName: "kube-api-access-dxllb") pod "abf03403-5280-4fe7-ac72-8d17a8a7fcb7" (UID: "abf03403-5280-4fe7-ac72-8d17a8a7fcb7"). InnerVolumeSpecName "kube-api-access-dxllb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.530768 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-config-data" (OuterVolumeSpecName: "config-data") pod "abf03403-5280-4fe7-ac72-8d17a8a7fcb7" (UID: "abf03403-5280-4fe7-ac72-8d17a8a7fcb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.542030 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abf03403-5280-4fe7-ac72-8d17a8a7fcb7" (UID: "abf03403-5280-4fe7-ac72-8d17a8a7fcb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.562327 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "abf03403-5280-4fe7-ac72-8d17a8a7fcb7" (UID: "abf03403-5280-4fe7-ac72-8d17a8a7fcb7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.577690 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "abf03403-5280-4fe7-ac72-8d17a8a7fcb7" (UID: "abf03403-5280-4fe7-ac72-8d17a8a7fcb7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.591232 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.591261 4953 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.591273 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxllb\" (UniqueName: \"kubernetes.io/projected/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-kube-api-access-dxllb\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.591283 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.591291 4953 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:22 crc kubenswrapper[4953]: I1211 10:37:22.591300 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf03403-5280-4fe7-ac72-8d17a8a7fcb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.415226 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.447617 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.458031 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.474087 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 10:37:23 crc kubenswrapper[4953]: E1211 10:37:23.474533 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" containerName="nova-api-log" Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.474551 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" containerName="nova-api-log" Dec 11 10:37:23 crc kubenswrapper[4953]: E1211 10:37:23.474593 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" containerName="nova-api-api" Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.474601 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" containerName="nova-api-api" Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.474812 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" containerName="nova-api-api" Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.474831 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" containerName="nova-api-log" Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.475928 4953 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.478758 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.479223 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.479627 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.489919 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.512364 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.512445 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.512499 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-logs\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.512518 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.512599 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lgdf\" (UniqueName: \"kubernetes.io/projected/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-kube-api-access-4lgdf\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.512729 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-config-data\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.615078 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.615385 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.615508 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-logs\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.615605 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.615736 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lgdf\" (UniqueName: \"kubernetes.io/projected/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-kube-api-access-4lgdf\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.615817 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-logs\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.616029 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-config-data\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.619959 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.620451 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.620470 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.623085 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-config-data\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.784380 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lgdf\" (UniqueName: \"kubernetes.io/projected/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-kube-api-access-4lgdf\") pod \"nova-api-0\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " pod="openstack/nova-api-0"
Dec 11 10:37:23 crc kubenswrapper[4953]: I1211 10:37:23.794099 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 11 10:37:24 crc kubenswrapper[4953]: I1211 10:37:24.284424 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 11 10:37:24 crc kubenswrapper[4953]: W1211 10:37:24.285674 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b66dbe7_edd9_4e23_a3d0_0661efe89ac6.slice/crio-e1ee2d4c9b8918cc2f47f83d4f058d2615bd70bb7a00d1ea815df94e5ab86f52 WatchSource:0}: Error finding container e1ee2d4c9b8918cc2f47f83d4f058d2615bd70bb7a00d1ea815df94e5ab86f52: Status 404 returned error can't find the container with id e1ee2d4c9b8918cc2f47f83d4f058d2615bd70bb7a00d1ea815df94e5ab86f52
Dec 11 10:37:24 crc kubenswrapper[4953]: I1211 10:37:24.426466 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6","Type":"ContainerStarted","Data":"e1ee2d4c9b8918cc2f47f83d4f058d2615bd70bb7a00d1ea815df94e5ab86f52"}
Dec 11 10:37:24 crc kubenswrapper[4953]: I1211 10:37:24.485905 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abf03403-5280-4fe7-ac72-8d17a8a7fcb7" path="/var/lib/kubelet/pods/abf03403-5280-4fe7-ac72-8d17a8a7fcb7/volumes"
Dec 11 10:37:25 crc kubenswrapper[4953]: I1211 10:37:25.437628 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6","Type":"ContainerStarted","Data":"39ba09432c8d47141f48eb0a06529b605d51f099d8537d288c6ec875cebae528"}
Dec 11 10:37:25 crc kubenswrapper[4953]: I1211 10:37:25.437992 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6","Type":"ContainerStarted","Data":"33acb5b8399e690c332876bb46d0a8aa9f480f6d6435312361f99da160bb499a"}
Dec 11 10:37:25 crc kubenswrapper[4953]: I1211 10:37:25.463333 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.46327563 podStartE2EDuration="2.46327563s" podCreationTimestamp="2025-12-11 10:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:37:25.461742991 +0000 UTC m=+1563.485602024" watchObservedRunningTime="2025-12-11 10:37:25.46327563 +0000 UTC m=+1563.487134663"
Dec 11 10:37:25 crc kubenswrapper[4953]: I1211 10:37:25.773598 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 11 10:37:25 crc kubenswrapper[4953]: I1211 10:37:25.801097 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 11 10:37:25 crc kubenswrapper[4953]: I1211 10:37:25.801186 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 11 10:37:30 crc kubenswrapper[4953]: I1211 10:37:30.774211 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 11 10:37:30 crc kubenswrapper[4953]: I1211 10:37:30.800177 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 11 10:37:30 crc kubenswrapper[4953]: I1211 10:37:30.800223 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 11 10:37:30 crc kubenswrapper[4953]: I1211 10:37:30.809937 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 11 10:37:31 crc kubenswrapper[4953]: I1211 10:37:31.690970 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 11 10:37:31 crc kubenswrapper[4953]: I1211 10:37:31.810714 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 11 10:37:31 crc kubenswrapper[4953]: I1211 10:37:31.810714 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 11 10:37:33 crc kubenswrapper[4953]: I1211 10:37:33.795583 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 11 10:37:33 crc kubenswrapper[4953]: I1211 10:37:33.797159 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 11 10:37:34 crc kubenswrapper[4953]: I1211 10:37:34.811724 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 11 10:37:34 crc kubenswrapper[4953]: I1211 10:37:34.811778 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 11 10:37:35 crc kubenswrapper[4953]: I1211 10:37:35.591883 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 11 10:37:40 crc kubenswrapper[4953]: I1211 10:37:40.805539 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 11 10:37:40 crc kubenswrapper[4953]: I1211 10:37:40.808487 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 11 10:37:40 crc kubenswrapper[4953]: I1211 10:37:40.813150 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 11 10:37:41 crc kubenswrapper[4953]: I1211 10:37:41.758018 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 11 10:37:43 crc kubenswrapper[4953]: I1211 10:37:43.814744 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 11 10:37:43 crc kubenswrapper[4953]: I1211 10:37:43.815143 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 11 10:37:43 crc kubenswrapper[4953]: I1211 10:37:43.815387 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 11 10:37:43 crc kubenswrapper[4953]: I1211 10:37:43.815437 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 11 10:37:43 crc kubenswrapper[4953]: I1211 10:37:43.820500 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 11 10:37:43 crc kubenswrapper[4953]: I1211 10:37:43.823247 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 11 10:37:48 crc kubenswrapper[4953]: I1211 10:37:48.193479 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:37:48 crc kubenswrapper[4953]: I1211 10:37:48.194886 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:38:02 crc kubenswrapper[4953]: I1211 10:38:02.288829 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Dec 11 10:38:02 crc kubenswrapper[4953]: I1211 10:38:02.289627 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="56f7d9a7-e24f-4b47-b829-7adcad2b0a60" containerName="openstackclient" containerID="cri-o://34c2fd04e3b65f3a279e40c0af6591784d2ebe01fd4833cd51539e8756e2eea7" gracePeriod=2
Dec 11 10:38:02 crc kubenswrapper[4953]: I1211 10:38:02.304620 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Dec 11 10:38:02 crc kubenswrapper[4953]: I1211 10:38:02.798865 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 11 10:38:02 crc kubenswrapper[4953]: I1211 10:38:02.836076 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinderaf3a-account-delete-rtt56"]
Dec 11 10:38:02 crc kubenswrapper[4953]: E1211 10:38:02.879727 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f7d9a7-e24f-4b47-b829-7adcad2b0a60" containerName="openstackclient"
Dec 11 10:38:02 crc kubenswrapper[4953]: I1211 10:38:02.880360 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f7d9a7-e24f-4b47-b829-7adcad2b0a60" containerName="openstackclient"
Dec 11 10:38:02 crc kubenswrapper[4953]: I1211 10:38:02.881382 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f7d9a7-e24f-4b47-b829-7adcad2b0a60" containerName="openstackclient"
Dec 11 10:38:02 crc kubenswrapper[4953]: I1211 10:38:02.888269 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinderaf3a-account-delete-rtt56"
Dec 11 10:38:02 crc kubenswrapper[4953]: I1211 10:38:02.949133 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2ff6\" (UniqueName: \"kubernetes.io/projected/46cd550e-17c8-4cd2-a5e0-9746edf42836-kube-api-access-v2ff6\") pod \"cinderaf3a-account-delete-rtt56\" (UID: \"46cd550e-17c8-4cd2-a5e0-9746edf42836\") " pod="openstack/cinderaf3a-account-delete-rtt56"
Dec 11 10:38:02 crc kubenswrapper[4953]: I1211 10:38:02.949928 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46cd550e-17c8-4cd2-a5e0-9746edf42836-operator-scripts\") pod \"cinderaf3a-account-delete-rtt56\" (UID: \"46cd550e-17c8-4cd2-a5e0-9746edf42836\") " pod="openstack/cinderaf3a-account-delete-rtt56"
Dec 11 10:38:02 crc kubenswrapper[4953]: E1211 10:38:02.950489 4953 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Dec 11 10:38:02 crc kubenswrapper[4953]: E1211 10:38:02.950542 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data podName:01196778-96de-4f79-b9ac-e01243f86ebb nodeName:}" failed. No retries permitted until 2025-12-11 10:38:03.450525589 +0000 UTC m=+1601.474384622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data") pod "rabbitmq-cell1-server-0" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb") : configmap "rabbitmq-cell1-config-data" not found
Dec 11 10:38:02 crc kubenswrapper[4953]: I1211 10:38:02.977224 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinderaf3a-account-delete-rtt56"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.050388 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron242b-account-delete-v47hk"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.051834 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron242b-account-delete-v47hk"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.053109 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46cd550e-17c8-4cd2-a5e0-9746edf42836-operator-scripts\") pod \"cinderaf3a-account-delete-rtt56\" (UID: \"46cd550e-17c8-4cd2-a5e0-9746edf42836\") " pod="openstack/cinderaf3a-account-delete-rtt56"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.053206 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2ff6\" (UniqueName: \"kubernetes.io/projected/46cd550e-17c8-4cd2-a5e0-9746edf42836-kube-api-access-v2ff6\") pod \"cinderaf3a-account-delete-rtt56\" (UID: \"46cd550e-17c8-4cd2-a5e0-9746edf42836\") " pod="openstack/cinderaf3a-account-delete-rtt56"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.054775 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46cd550e-17c8-4cd2-a5e0-9746edf42836-operator-scripts\") pod \"cinderaf3a-account-delete-rtt56\" (UID: \"46cd550e-17c8-4cd2-a5e0-9746edf42836\") " pod="openstack/cinderaf3a-account-delete-rtt56"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.096073 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n6pxp"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.100799 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2ff6\" (UniqueName: \"kubernetes.io/projected/46cd550e-17c8-4cd2-a5e0-9746edf42836-kube-api-access-v2ff6\") pod \"cinderaf3a-account-delete-rtt56\" (UID: \"46cd550e-17c8-4cd2-a5e0-9746edf42836\") " pod="openstack/cinderaf3a-account-delete-rtt56"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.152184 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.152471 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" containerName="ovn-northd" containerID="cri-o://c0f6853d6258372aa9946f5e58c9f253d8e32cbaa5a5914801b2de468c7d1703" gracePeriod=30
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.152630 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" containerName="openstack-network-exporter" containerID="cri-o://08ed06bcd9932bd8cfb8cd17406a3860f1745658a72ff1d03735d51b925d7e64" gracePeriod=30
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.157283 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c-operator-scripts\") pod \"neutron242b-account-delete-v47hk\" (UID: \"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c\") " pod="openstack/neutron242b-account-delete-v47hk"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.157513 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgsqs\" (UniqueName: \"kubernetes.io/projected/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c-kube-api-access-mgsqs\") pod \"neutron242b-account-delete-v47hk\" (UID: \"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c\") " pod="openstack/neutron242b-account-delete-v47hk"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.282755 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-tqd68"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.283168 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-tqd68" podUID="48902dd9-8c9f-4983-b8dd-6f22f4382a19" containerName="openstack-network-exporter" containerID="cri-o://9700ecceafec88bb52bb9474793b818e8b7aef5592713f65ce2d49b857374493" gracePeriod=30
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.286626 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinderaf3a-account-delete-rtt56"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.289262 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c-operator-scripts\") pod \"neutron242b-account-delete-v47hk\" (UID: \"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c\") " pod="openstack/neutron242b-account-delete-v47hk"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.289526 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgsqs\" (UniqueName: \"kubernetes.io/projected/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c-kube-api-access-mgsqs\") pod \"neutron242b-account-delete-v47hk\" (UID: \"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c\") " pod="openstack/neutron242b-account-delete-v47hk"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.291627 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c-operator-scripts\") pod \"neutron242b-account-delete-v47hk\" (UID: \"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c\") " pod="openstack/neutron242b-account-delete-v47hk"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.320643 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-mbtwm"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.347217 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron242b-account-delete-v47hk"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.364717 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgsqs\" (UniqueName: \"kubernetes.io/projected/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c-kube-api-access-mgsqs\") pod \"neutron242b-account-delete-v47hk\" (UID: \"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c\") " pod="openstack/neutron242b-account-delete-v47hk"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.387272 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-fzjwm"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.447630 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-fzjwm"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.467666 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron242b-account-delete-v47hk"
Dec 11 10:38:03 crc kubenswrapper[4953]: E1211 10:38:03.500287 4953 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Dec 11 10:38:03 crc kubenswrapper[4953]: E1211 10:38:03.500380 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data podName:01196778-96de-4f79-b9ac-e01243f86ebb nodeName:}" failed. No retries permitted until 2025-12-11 10:38:04.500361661 +0000 UTC m=+1602.524220694 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data") pod "rabbitmq-cell1-server-0" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb") : configmap "rabbitmq-cell1-config-data" not found
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.572400 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placementa6a0-account-delete-vhpnd"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.579638 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placementa6a0-account-delete-vhpnd"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.638216 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance5aff-account-delete-5hksm"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.650283 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance5aff-account-delete-5hksm"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.675644 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementa6a0-account-delete-vhpnd"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.712731 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-gnwcw"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.718948 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdx4n\" (UniqueName: \"kubernetes.io/projected/3aee1a2c-6a1e-48c0-9491-3f61371047eb-kube-api-access-vdx4n\") pod \"placementa6a0-account-delete-vhpnd\" (UID: \"3aee1a2c-6a1e-48c0-9491-3f61371047eb\") " pod="openstack/placementa6a0-account-delete-vhpnd"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.836050 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts\") pod \"placementa6a0-account-delete-vhpnd\" (UID: \"3aee1a2c-6a1e-48c0-9491-3f61371047eb\") " pod="openstack/placementa6a0-account-delete-vhpnd"
Dec 11 10:38:03 crc kubenswrapper[4953]: E1211 10:38:03.856685 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c0f6853d6258372aa9946f5e58c9f253d8e32cbaa5a5914801b2de468c7d1703" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.865133 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance5aff-account-delete-5hksm"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.880321 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-gnwcw"]
Dec 11 10:38:03 crc kubenswrapper[4953]: E1211 10:38:03.895005 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c0f6853d6258372aa9946f5e58c9f253d8e32cbaa5a5914801b2de468c7d1703" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Dec 11 10:38:03 crc kubenswrapper[4953]: E1211 10:38:03.896848 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c0f6853d6258372aa9946f5e58c9f253d8e32cbaa5a5914801b2de468c7d1703" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Dec 11 10:38:03 crc kubenswrapper[4953]: E1211 10:38:03.896995 4953 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" containerName="ovn-northd"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.909789 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.931951 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-kv78b"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.943567 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6515789-e6f6-4aa3-83f3-4fc58f862dc9-operator-scripts\") pod \"glance5aff-account-delete-5hksm\" (UID: \"e6515789-e6f6-4aa3-83f3-4fc58f862dc9\") " pod="openstack/glance5aff-account-delete-5hksm"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.949040 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln9z6\" (UniqueName: \"kubernetes.io/projected/e6515789-e6f6-4aa3-83f3-4fc58f862dc9-kube-api-access-ln9z6\") pod \"glance5aff-account-delete-5hksm\" (UID: \"e6515789-e6f6-4aa3-83f3-4fc58f862dc9\") " pod="openstack/glance5aff-account-delete-5hksm"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.949156 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdx4n\" (UniqueName: \"kubernetes.io/projected/3aee1a2c-6a1e-48c0-9491-3f61371047eb-kube-api-access-vdx4n\") pod \"placementa6a0-account-delete-vhpnd\" (UID: \"3aee1a2c-6a1e-48c0-9491-3f61371047eb\") " pod="openstack/placementa6a0-account-delete-vhpnd"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.949283 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts\") pod \"placementa6a0-account-delete-vhpnd\" (UID: \"3aee1a2c-6a1e-48c0-9491-3f61371047eb\") " pod="openstack/placementa6a0-account-delete-vhpnd"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.950386 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts\") pod \"placementa6a0-account-delete-vhpnd\" (UID: \"3aee1a2c-6a1e-48c0-9491-3f61371047eb\") " pod="openstack/placementa6a0-account-delete-vhpnd"
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.987489 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-kv78b"]
Dec 11 10:38:03 crc kubenswrapper[4953]: I1211 10:38:03.987763 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-vzw7v"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.008654 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdx4n\" (UniqueName: \"kubernetes.io/projected/3aee1a2c-6a1e-48c0-9491-3f61371047eb-kube-api-access-vdx4n\") pod \"placementa6a0-account-delete-vhpnd\" (UID: \"3aee1a2c-6a1e-48c0-9491-3f61371047eb\") " pod="openstack/placementa6a0-account-delete-vhpnd"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.013738 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-vzw7v"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.041512 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican3c8c-account-delete-kzsq8"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.043753 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican3c8c-account-delete-kzsq8"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.053488 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln9z6\" (UniqueName: \"kubernetes.io/projected/e6515789-e6f6-4aa3-83f3-4fc58f862dc9-kube-api-access-ln9z6\") pod \"glance5aff-account-delete-5hksm\" (UID: \"e6515789-e6f6-4aa3-83f3-4fc58f862dc9\") " pod="openstack/glance5aff-account-delete-5hksm"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.053742 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6515789-e6f6-4aa3-83f3-4fc58f862dc9-operator-scripts\") pod \"glance5aff-account-delete-5hksm\" (UID: \"e6515789-e6f6-4aa3-83f3-4fc58f862dc9\") " pod="openstack/glance5aff-account-delete-5hksm"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.054466 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6515789-e6f6-4aa3-83f3-4fc58f862dc9-operator-scripts\") pod \"glance5aff-account-delete-5hksm\" (UID: \"e6515789-e6f6-4aa3-83f3-4fc58f862dc9\") " pod="openstack/glance5aff-account-delete-5hksm"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.067400 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican3c8c-account-delete-kzsq8"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.075851 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln9z6\" (UniqueName: \"kubernetes.io/projected/e6515789-e6f6-4aa3-83f3-4fc58f862dc9-kube-api-access-ln9z6\") pod \"glance5aff-account-delete-5hksm\" (UID: \"e6515789-e6f6-4aa3-83f3-4fc58f862dc9\") " pod="openstack/glance5aff-account-delete-5hksm"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.086550 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-vj92k"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.137852 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-vj92k"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.142009 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placementa6a0-account-delete-vhpnd"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.158901 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts\") pod \"barbican3c8c-account-delete-kzsq8\" (UID: \"b09879bd-62c8-4810-ad58-09db28d6afb5\") " pod="openstack/barbican3c8c-account-delete-kzsq8"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.159068 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt9pr\" (UniqueName: \"kubernetes.io/projected/b09879bd-62c8-4810-ad58-09db28d6afb5-kube-api-access-xt9pr\") pod \"barbican3c8c-account-delete-kzsq8\" (UID: \"b09879bd-62c8-4810-ad58-09db28d6afb5\") " pod="openstack/barbican3c8c-account-delete-kzsq8"
Dec 11 10:38:04 crc kubenswrapper[4953]: E1211 10:38:04.161391 4953 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Dec 11 10:38:04 crc kubenswrapper[4953]: E1211 10:38:04.161511 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data podName:b29c8985-0d8c-4382-9969-29422929136f nodeName:}" failed. No retries permitted until 2025-12-11 10:38:04.661461789 +0000 UTC m=+1602.685320882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data") pod "rabbitmq-server-0" (UID: "b29c8985-0d8c-4382-9969-29422929136f") : configmap "rabbitmq-config-data" not found
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.188560 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell0caaa-account-delete-n4fck"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.209221 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell0caaa-account-delete-n4fck"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.207467 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance5aff-account-delete-5hksm"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.261002 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt9pr\" (UniqueName: \"kubernetes.io/projected/b09879bd-62c8-4810-ad58-09db28d6afb5-kube-api-access-xt9pr\") pod \"barbican3c8c-account-delete-kzsq8\" (UID: \"b09879bd-62c8-4810-ad58-09db28d6afb5\") " pod="openstack/barbican3c8c-account-delete-kzsq8"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.261131 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts\") pod \"barbican3c8c-account-delete-kzsq8\" (UID: \"b09879bd-62c8-4810-ad58-09db28d6afb5\") " pod="openstack/barbican3c8c-account-delete-kzsq8"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.261922 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts\") pod \"barbican3c8c-account-delete-kzsq8\" (UID: \"b09879bd-62c8-4810-ad58-09db28d6afb5\") " pod="openstack/barbican3c8c-account-delete-kzsq8"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.280098 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell0caaa-account-delete-n4fck"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.336301 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tqd68_48902dd9-8c9f-4983-b8dd-6f22f4382a19/openstack-network-exporter/0.log"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.336348 4953 generic.go:334] "Generic (PLEG): container finished" podID="48902dd9-8c9f-4983-b8dd-6f22f4382a19" containerID="9700ecceafec88bb52bb9474793b818e8b7aef5592713f65ce2d49b857374493" exitCode=2
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.336380 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tqd68" event={"ID":"48902dd9-8c9f-4983-b8dd-6f22f4382a19","Type":"ContainerDied","Data":"9700ecceafec88bb52bb9474793b818e8b7aef5592713f65ce2d49b857374493"}
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.345618 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.345879 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d1833793-1408-450f-8a7e-e01e6048edd5" containerName="cinder-scheduler" containerID="cri-o://6633d2d60118f289461651ca377abc04f8eae490967bd314f612d43a8c179596" gracePeriod=30
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.346281 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d1833793-1408-450f-8a7e-e01e6048edd5" containerName="probe" containerID="cri-o://a1dd894fb738f43b760b8725bd438e6786b826d8bd5ea6ec40ebf1c67bee2cc0" gracePeriod=30
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.364983 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t47tk\" (UniqueName: \"kubernetes.io/projected/12df8687-e24e-47fb-802c-3ab978ed04fd-kube-api-access-t47tk\") pod \"novacell0caaa-account-delete-n4fck\" (UID: \"12df8687-e24e-47fb-802c-3ab978ed04fd\") " pod="openstack/novacell0caaa-account-delete-n4fck"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.365044 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts\") pod \"novacell0caaa-account-delete-n4fck\" (UID: \"12df8687-e24e-47fb-802c-3ab978ed04fd\") " pod="openstack/novacell0caaa-account-delete-n4fck"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.423793 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt9pr\" (UniqueName: \"kubernetes.io/projected/b09879bd-62c8-4810-ad58-09db28d6afb5-kube-api-access-xt9pr\") pod \"barbican3c8c-account-delete-kzsq8\" (UID: \"b09879bd-62c8-4810-ad58-09db28d6afb5\") " pod="openstack/barbican3c8c-account-delete-kzsq8"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.484527 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t47tk\" (UniqueName: \"kubernetes.io/projected/12df8687-e24e-47fb-802c-3ab978ed04fd-kube-api-access-t47tk\") pod \"novacell0caaa-account-delete-n4fck\" (UID: \"12df8687-e24e-47fb-802c-3ab978ed04fd\") " pod="openstack/novacell0caaa-account-delete-n4fck"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.485068 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts\") pod \"novacell0caaa-account-delete-n4fck\" (UID: \"12df8687-e24e-47fb-802c-3ab978ed04fd\") " pod="openstack/novacell0caaa-account-delete-n4fck"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.636247 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts\") pod \"novacell0caaa-account-delete-n4fck\" (UID: \"12df8687-e24e-47fb-802c-3ab978ed04fd\") " pod="openstack/novacell0caaa-account-delete-n4fck"
Dec 11 10:38:04 crc kubenswrapper[4953]: E1211 10:38:04.676926 4953 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Dec 11 10:38:04 crc kubenswrapper[4953]: E1211 10:38:04.677012 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data podName:b29c8985-0d8c-4382-9969-29422929136f nodeName:}" failed. No retries permitted until 2025-12-11 10:38:05.676992323 +0000 UTC m=+1603.700851356 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data") pod "rabbitmq-server-0" (UID: "b29c8985-0d8c-4382-9969-29422929136f") : configmap "rabbitmq-config-data" not found
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.678329 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican3c8c-account-delete-kzsq8"
Dec 11 10:38:04 crc kubenswrapper[4953]: E1211 10:38:04.679604 4953 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Dec 11 10:38:04 crc kubenswrapper[4953]: E1211 10:38:04.679669 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data podName:01196778-96de-4f79-b9ac-e01243f86ebb nodeName:}" failed. No retries permitted until 2025-12-11 10:38:06.679645846 +0000 UTC m=+1604.703504869 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data") pod "rabbitmq-cell1-server-0" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb") : configmap "rabbitmq-cell1-config-data" not found
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.715557 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t47tk\" (UniqueName: \"kubernetes.io/projected/12df8687-e24e-47fb-802c-3ab978ed04fd-kube-api-access-t47tk\") pod \"novacell0caaa-account-delete-n4fck\" (UID: \"12df8687-e24e-47fb-802c-3ab978ed04fd\") " pod="openstack/novacell0caaa-account-delete-n4fck"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.763007 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b23531-3ad1-4b46-88f6-e930d79b6556" path="/var/lib/kubelet/pods/24b23531-3ad1-4b46-88f6-e930d79b6556/volumes"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.766732 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe16c5e-1161-4e0d-83d4-9f07a2643a6a" path="/var/lib/kubelet/pods/5fe16c5e-1161-4e0d-83d4-9f07a2643a6a/volumes"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.771113 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a" path="/var/lib/kubelet/pods/ad3e4c7f-96eb-40bd-bcdd-7e3771908b6a/volumes"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.776256 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c46c2893-6218-455e-a4ee-cf1b4cda45b7" path="/var/lib/kubelet/pods/c46c2893-6218-455e-a4ee-cf1b4cda45b7/volumes"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.780893 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f099a9d1-d895-4fdc-84cc-28df6fb24db0" path="/var/lib/kubelet/pods/f099a9d1-d895-4fdc-84cc-28df6fb24db0/volumes"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.782496 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.782541 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8t2m9"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.786917 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="b4237606-fdcf-403b-8e5a-1bbb4a2e38de" containerName="openstack-network-exporter" containerID="cri-o://e70fe7fa2779f3637bf42c139e92bf6db02367cfe5162c9dbcefd534285e7752" gracePeriod=300
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.802772 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-npwrw"]
Dec 11 10:38:04 crc kubenswrapper[4953]: E1211 10:38:04.811960 4953 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-n6pxp" message=<
Dec 11 10:38:04 crc kubenswrapper[4953]: Exiting ovn-controller (1) [ OK ]
Dec 11 10:38:04 crc kubenswrapper[4953]: >
Dec 11 10:38:04 crc kubenswrapper[4953]: E1211 10:38:04.811996 4953 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-n6pxp" podUID="498f7a43-7db9-42e8-b722-a5fb6ae4749f" containerName="ovn-controller" containerID="cri-o://d0b7c04c7aac708c8d19088fd2a98707adc64c19e1992cf63c2b85b7be925ba4"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.812029 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-n6pxp" podUID="498f7a43-7db9-42e8-b722-a5fb6ae4749f" containerName="ovn-controller" containerID="cri-o://d0b7c04c7aac708c8d19088fd2a98707adc64c19e1992cf63c2b85b7be925ba4" gracePeriod=29
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.830639 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8t2m9"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.844585 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-npwrw"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.857418 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapi339c-account-delete-l2kws"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.880925 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi339c-account-delete-l2kws"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.906162 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi339c-account-delete-l2kws"]
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.957765 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell0caaa-account-delete-n4fck"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.973999 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="b4237606-fdcf-403b-8e5a-1bbb4a2e38de" containerName="ovsdbserver-nb" containerID="cri-o://16b6376ca3b41c1f6e9ee55d0479d0566772d86be8f749eb1b02c4edcfa051b9" gracePeriod=300
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.993189 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcrgq\" (UniqueName: \"kubernetes.io/projected/10e32559-b465-4538-af8b-9dd3deedf2b9-kube-api-access-wcrgq\") pod \"novaapi339c-account-delete-l2kws\" (UID: \"10e32559-b465-4538-af8b-9dd3deedf2b9\") " pod="openstack/novaapi339c-account-delete-l2kws"
Dec 11 10:38:04 crc kubenswrapper[4953]: I1211 10:38:04.993450 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts\") pod \"novaapi339c-account-delete-l2kws\" (UID: \"10e32559-b465-4538-af8b-9dd3deedf2b9\") " pod="openstack/novaapi339c-account-delete-l2kws"
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:04.999542 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.000248 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" containerName="openstack-network-exporter" containerID="cri-o://8c899ec3f19ce335b2f89755f8a4e4532bfe9f417bd7fb76d6371e306044ac4e" gracePeriod=300
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.012477 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.014118 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4b1b7520-f52c-4a2a-98e5-16ac7460bade" containerName="cinder-api" containerID="cri-o://7be2bbaefa3e689cb3eb71687b4eaaaa7ace9bf5c6191bc5de9d655c138598a0" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.014145 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4b1b7520-f52c-4a2a-98e5-16ac7460bade" containerName="cinder-api-log" containerID="cri-o://6b9b936ddf7f45582285d8d2e0da2428665ce0beadde06342951de718bfb19dc" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.029939 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-677c7c8c9c-gh7rd"]
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.030185 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-677c7c8c9c-gh7rd" podUID="261b522a-b786-4b2b-975c-43f1cc0d8ccf" containerName="neutron-api" containerID="cri-o://15faef1b4ad4c5d4d8142bd02ca5c8b72aa84f70caf14fbea0d98e763e1ee6d8" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.030667 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-677c7c8c9c-gh7rd" podUID="261b522a-b786-4b2b-975c-43f1cc0d8ccf" containerName="neutron-httpd" containerID="cri-o://8ec34f149eb7b0df59ed60ac6fbbd810019ea5b30d0ab842e625394e2d8c2226" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.042793 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7567d9469d-rx5dx"]
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.043080 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7567d9469d-rx5dx" podUID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" containerName="placement-log" containerID="cri-o://111e78ea1225285d6f9cf9e61ccddd3adee93f71a7ea5c5159526554c821ed7c" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.043591 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7567d9469d-rx5dx" podUID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" containerName="placement-api" containerID="cri-o://dfc2a9a94740a5c1e7c18669633ef308479efaeb144cead2c91d20383752f603" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.088033 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell1e356-account-delete-h5n9c"]
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.092513 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell1e356-account-delete-h5n9c"
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.096134 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcrgq\" (UniqueName: \"kubernetes.io/projected/10e32559-b465-4538-af8b-9dd3deedf2b9-kube-api-access-wcrgq\") pod \"novaapi339c-account-delete-l2kws\" (UID: \"10e32559-b465-4538-af8b-9dd3deedf2b9\") " pod="openstack/novaapi339c-account-delete-l2kws"
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.096188 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts\") pod \"novaapi339c-account-delete-l2kws\" (UID: \"10e32559-b465-4538-af8b-9dd3deedf2b9\") " pod="openstack/novaapi339c-account-delete-l2kws"
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.097861 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts\") pod \"novaapi339c-account-delete-l2kws\" (UID: \"10e32559-b465-4538-af8b-9dd3deedf2b9\") " pod="openstack/novaapi339c-account-delete-l2kws"
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.140208 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.140859 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-server" containerID="cri-o://8f4c46cb4b9e3e20f278144150f92781df0603ba1ce189953a04f830ee3bc004" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.140926 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-expirer" containerID="cri-o://55455d29b2f9f09dccbeb1ee95244b733e578206c81bca651b8b08a2abc3da6f" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.140917 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-server" containerID="cri-o://42ee56a6413b971f972dd83deea70f7f4ed0f5bd15d3d8739f47c3de625b36da" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.140979 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-updater" containerID="cri-o://ee01036005d992c399d8891c4088b620c28089677482095eb23ddbcf5787ed0f" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.141034 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-auditor" containerID="cri-o://8271a6a07ac8401063b754218c3eb89ceb4f2d9d019082057eb897dcd5350656" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.141077 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-replicator" containerID="cri-o://47e0171f5c393def51346598fe0050490ca2584402ed6532e4a68c71c29d1284" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.141121 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-updater" containerID="cri-o://c7fa20846bc15438ea48e549cb0457b5fdbbcd2598a4d940ee938fb4fb3a9db3" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.141142 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-reaper" containerID="cri-o://84916ff0808e4afae4bbc6dc9c0bfcc649e85608c78bcca53fc062955964d97f" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.141183 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-auditor" containerID="cri-o://8bd2acaf8a28b1f1656e66014334ca8748f846ad6e8ad38b27cb4bdf466f3173" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.141129 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-server" containerID="cri-o://bf1b66be16060aee36932d81a73465cd1174ad5e0ce2ac136fa9b17ea2beb026" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.141224 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-replicator" containerID="cri-o://679d2553c36012b1b180157877c057ec44f2c2462adfbbecdb5379d3b623b02c" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.141244 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-auditor" containerID="cri-o://bc0e3f085ef80ef3d58ffae3ef2a52f5bf40447e1f3f4fae4ba935bd88ae1802" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.141329 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-replicator" containerID="cri-o://8981b379cfe002ec1ffbcd789bf3f9088d55241543514d305383406d070e9749" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.141491 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="swift-recon-cron" containerID="cri-o://510beded97d4416b8880cd56e6120af1f949d427769ab1ee2c169557d12d5494" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.141505 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="rsync" containerID="cri-o://3c65359d49ee68c46b25f7c48cca23725c2a07a228cfed6a3b8c90cef4f401ce" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.195169 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-sstm7"]
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.195611 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" podUID="e27f6309-0ccd-4aca-ad87-0cd7a9357469" containerName="dnsmasq-dns" containerID="cri-o://43c5d3cbea07b2f71a6427f7f8f0c5486326e6e637aefebb1edd4c2b3c333c07" gracePeriod=10
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.202129 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts\") pod \"novacell1e356-account-delete-h5n9c\" (UID: \"9c62085a-9722-4020-a26f-2adee83f78c8\") " pod="openstack/novacell1e356-account-delete-h5n9c"
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.202391 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-999fx\" (UniqueName: \"kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx\") pod \"novacell1e356-account-delete-h5n9c\" (UID: \"9c62085a-9722-4020-a26f-2adee83f78c8\") " pod="openstack/novacell1e356-account-delete-h5n9c"
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.203170 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcrgq\" (UniqueName: \"kubernetes.io/projected/10e32559-b465-4538-af8b-9dd3deedf2b9-kube-api-access-wcrgq\") pod \"novaapi339c-account-delete-l2kws\" (UID: \"10e32559-b465-4538-af8b-9dd3deedf2b9\") " pod="openstack/novaapi339c-account-delete-l2kws"
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.222064 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi339c-account-delete-l2kws"
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.228431 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.231031 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e067a835-8a1a-4672-aaea-b8c101109018" containerName="glance-httpd" containerID="cri-o://30c55b9a63cff189be97f461ce82cf19d069820c204b72f08733751b6e4d8e3b" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.231178 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e067a835-8a1a-4672-aaea-b8c101109018" containerName="glance-log" containerID="cri-o://2e9a60ec1684ff881133bf906166805dce055256199aa98702401b39a20c68d8" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.275854 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell1e356-account-delete-h5n9c"]
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.305418 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.305702 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7b77681a-0823-42e6-b0a4-2af1ce955970" containerName="glance-log" containerID="cri-o://3dd428abe094a4785fe247c46053c25a62247016c38cd9af55762bbf581ab80f" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.305863 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7b77681a-0823-42e6-b0a4-2af1ce955970" containerName="glance-httpd" containerID="cri-o://4221eaf86758a08993df7de85552e51a217b8b7260281a70c92cd1a666135bc7" gracePeriod=30
Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.315436
4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts\") pod \"novacell1e356-account-delete-h5n9c\" (UID: \"9c62085a-9722-4020-a26f-2adee83f78c8\") " pod="openstack/novacell1e356-account-delete-h5n9c" Dec 11 10:38:05 crc kubenswrapper[4953]: E1211 10:38:05.315686 4953 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Dec 11 10:38:05 crc kubenswrapper[4953]: E1211 10:38:05.315760 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts podName:9c62085a-9722-4020-a26f-2adee83f78c8 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:05.815741288 +0000 UTC m=+1603.839600321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts") pod "novacell1e356-account-delete-h5n9c" (UID: "9c62085a-9722-4020-a26f-2adee83f78c8") : configmap "openstack-cell1-scripts" not found Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.316035 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-999fx\" (UniqueName: \"kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx\") pod \"novacell1e356-account-delete-h5n9c\" (UID: \"9c62085a-9722-4020-a26f-2adee83f78c8\") " pod="openstack/novacell1e356-account-delete-h5n9c" Dec 11 10:38:05 crc kubenswrapper[4953]: E1211 10:38:05.328848 4953 projected.go:194] Error preparing data for projected volume kube-api-access-999fx for pod openstack/novacell1e356-account-delete-h5n9c: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 10:38:05 crc kubenswrapper[4953]: E1211 10:38:05.328907 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx podName:9c62085a-9722-4020-a26f-2adee83f78c8 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:05.828889442 +0000 UTC m=+1603.852748475 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-999fx" (UniqueName: "kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx") pod "novacell1e356-account-delete-h5n9c" (UID: "9c62085a-9722-4020-a26f-2adee83f78c8") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.332481 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-t4cc8"] Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.333856 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" containerName="ovsdbserver-sb" containerID="cri-o://b7f497b107b8e8652a7f168df902d76edf4cc8c0d003e369a126e81b80c2c81c" gracePeriod=300 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.405363 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-t4cc8"] Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.496688 4953 generic.go:334] "Generic (PLEG): container finished" podID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" containerID="08ed06bcd9932bd8cfb8cd17406a3860f1745658a72ff1d03735d51b925d7e64" exitCode=2 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.496814 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4287349e-ff2e-483c-9ede-08ec5e03a2b4","Type":"ContainerDied","Data":"08ed06bcd9932bd8cfb8cd17406a3860f1745658a72ff1d03735d51b925d7e64"} Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.512695 4953 generic.go:334] "Generic (PLEG): container finished" podID="4b1b7520-f52c-4a2a-98e5-16ac7460bade" containerID="6b9b936ddf7f45582285d8d2e0da2428665ce0beadde06342951de718bfb19dc" exitCode=143 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.512837 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4b1b7520-f52c-4a2a-98e5-16ac7460bade","Type":"ContainerDied","Data":"6b9b936ddf7f45582285d8d2e0da2428665ce0beadde06342951de718bfb19dc"} Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.535106 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.543597 4953 generic.go:334] "Generic (PLEG): container finished" podID="8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" containerID="8c899ec3f19ce335b2f89755f8a4e4532bfe9f417bd7fb76d6371e306044ac4e" exitCode=2 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.543666 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e","Type":"ContainerDied","Data":"8c899ec3f19ce335b2f89755f8a4e4532bfe9f417bd7fb76d6371e306044ac4e"} Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.564986 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.573810 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-log" containerID="cri-o://f81a9c3634afcd79a633362dcf52201c0a4c001fbfe1929486695ab342d99feb" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: E1211 10:38:05.678653 4953 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 11 10:38:05 crc kubenswrapper[4953]: 
E1211 10:38:05.678721 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data podName:b29c8985-0d8c-4382-9969-29422929136f nodeName:}" failed. No retries permitted until 2025-12-11 10:38:07.678703777 +0000 UTC m=+1605.702562810 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data") pod "rabbitmq-server-0" (UID: "b29c8985-0d8c-4382-9969-29422929136f") : configmap "rabbitmq-config-data" not found Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.683528 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-metadata" containerID="cri-o://d01aaa77da386e9baab54f2e6b436105ab0703db857b3d9adc7c4e2df8f0e6e2" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.704181 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.705088 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" containerName="nova-api-log" containerID="cri-o://33acb5b8399e690c332876bb46d0a8aa9f480f6d6435312361f99da160bb499a" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.705152 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" containerName="nova-api-api" containerID="cri-o://39ba09432c8d47141f48eb0a06529b605d51f099d8537d288c6ec875cebae528" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.730664 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="01196778-96de-4f79-b9ac-e01243f86ebb" containerName="rabbitmq" containerID="cri-o://a1bc8164296634778d4abaa0460ca228c5ac0bad626c3a54c3a93f97fe857237" gracePeriod=604800 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.756714 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b4237606-fdcf-403b-8e5a-1bbb4a2e38de/ovsdbserver-nb/0.log" Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.756784 4953 generic.go:334] "Generic (PLEG): container finished" podID="b4237606-fdcf-403b-8e5a-1bbb4a2e38de" containerID="e70fe7fa2779f3637bf42c139e92bf6db02367cfe5162c9dbcefd534285e7752" exitCode=2 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.756821 4953 generic.go:334] "Generic (PLEG): container finished" podID="b4237606-fdcf-403b-8e5a-1bbb4a2e38de" containerID="16b6376ca3b41c1f6e9ee55d0479d0566772d86be8f749eb1b02c4edcfa051b9" exitCode=143 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.756930 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4237606-fdcf-403b-8e5a-1bbb4a2e38de","Type":"ContainerDied","Data":"e70fe7fa2779f3637bf42c139e92bf6db02367cfe5162c9dbcefd534285e7752"} Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.756959 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4237606-fdcf-403b-8e5a-1bbb4a2e38de","Type":"ContainerDied","Data":"16b6376ca3b41c1f6e9ee55d0479d0566772d86be8f749eb1b02c4edcfa051b9"} Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.778932 4953 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-555fcfcf54-sqln7"] Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.779126 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" podUID="caec0159-12b1-46f9-952c-10f229948036" containerName="barbican-keystone-listener-log" containerID="cri-o://7d7961ffaf0fa5639d3e96bbbb7ff1815fd8017ed09d51fdb3f868fc15297c07" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.779520 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" podUID="caec0159-12b1-46f9-952c-10f229948036" containerName="barbican-keystone-listener" containerID="cri-o://6cb07fdb5e67db9e16c8125784b8b3014f71452b7d478333ae5ae1ede91ec6ff" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.791034 4953 generic.go:334] "Generic (PLEG): container finished" podID="498f7a43-7db9-42e8-b722-a5fb6ae4749f" containerID="d0b7c04c7aac708c8d19088fd2a98707adc64c19e1992cf63c2b85b7be925ba4" exitCode=0 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.791125 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6pxp" event={"ID":"498f7a43-7db9-42e8-b722-a5fb6ae4749f","Type":"ContainerDied","Data":"d0b7c04c7aac708c8d19088fd2a98707adc64c19e1992cf63c2b85b7be925ba4"} Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.795617 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7c85df7b9d-rdbfq"] Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.795830 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7c85df7b9d-rdbfq" podUID="767370a9-f8dd-4370-a2cc-f5baeff52c54" containerName="barbican-api-log" containerID="cri-o://ae3f22ec9f89b003c85fac5cd8cf0695244934ca68cf1fc2a0f17935650f23bf" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.796394 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7c85df7b9d-rdbfq" podUID="767370a9-f8dd-4370-a2cc-f5baeff52c54" containerName="barbican-api" containerID="cri-o://f1f3935cba9d49f468aa48835e818e819d4e1455992846d4cd92a2e960523799" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.805548 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:42346->10.217.0.203:8775: read: connection reset by peer" Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.812064 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-78f5cf7bd5-24fm8"] Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.812261 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" podUID="8521d832-efe5-4653-8c0e-8921f916e10f" containerName="proxy-httpd" containerID="cri-o://ae1625ae9b7343e79bf1b390eabfcbbde5a933a354f8b01e304d5b2edc571afd" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.813095 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" podUID="8521d832-efe5-4653-8c0e-8921f916e10f" containerName="proxy-server" 
containerID="cri-o://d900251d830ca62ae055c9f9a2f8078dbd3d1545f50142030a3209a17a071070" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.817508 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": EOF" Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.824214 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6cffd87c8c-wlgnt"] Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.824410 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" podUID="544e1955-4316-4587-90a8-94bac4f81ae5" containerName="barbican-worker-log" containerID="cri-o://9bcdd67ff3f27b165dca3277b206f20442bbecd9d522b5435dc8a058e29f8375" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.824676 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" podUID="544e1955-4316-4587-90a8-94bac4f81ae5" containerName="barbican-worker" containerID="cri-o://120e662c3201d0f81e55488f64c74d01e67c74d5af04b0ca903d4ba77213d505" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.833334 4953 generic.go:334] "Generic (PLEG): container finished" podID="56f7d9a7-e24f-4b47-b829-7adcad2b0a60" containerID="34c2fd04e3b65f3a279e40c0af6591784d2ebe01fd4833cd51539e8756e2eea7" exitCode=137 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.842837 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.843040 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="79a93889-ae40-4bd1-a697-5797e065231b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bc5af3bb14085fb06d4fbb19425f7956a6394356771a345adb23fe49da34c4ef" gracePeriod=30 Dec 11 10:38:05 crc kubenswrapper[4953]: E1211 10:38:05.847117 4953 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Dec 11 10:38:05 crc kubenswrapper[4953]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 11 10:38:05 crc kubenswrapper[4953]: + source /usr/local/bin/container-scripts/functions Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNBridge=br-int Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNRemote=tcp:localhost:6642 Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNEncapType=geneve Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNAvailabilityZones= Dec 11 10:38:05 crc kubenswrapper[4953]: ++ EnableChassisAsGateway=true Dec 11 10:38:05 crc kubenswrapper[4953]: ++ PhysicalNetworks= Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNHostName= Dec 11 10:38:05 crc kubenswrapper[4953]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 11 10:38:05 crc kubenswrapper[4953]: ++ ovs_dir=/var/lib/openvswitch Dec 11 10:38:05 crc kubenswrapper[4953]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 11 10:38:05 crc kubenswrapper[4953]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 11 10:38:05 crc kubenswrapper[4953]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 11 10:38:05 crc kubenswrapper[4953]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 10:38:05 crc kubenswrapper[4953]: + sleep 0.5 Dec 11 10:38:05 crc kubenswrapper[4953]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 10:38:05 crc kubenswrapper[4953]: + sleep 0.5 Dec 11 10:38:05 crc kubenswrapper[4953]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 10:38:05 crc kubenswrapper[4953]: + cleanup_ovsdb_server_semaphore Dec 11 10:38:05 crc kubenswrapper[4953]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 11 10:38:05 crc kubenswrapper[4953]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 11 10:38:05 crc kubenswrapper[4953]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-mbtwm" message=< Dec 11 10:38:05 crc kubenswrapper[4953]: Exiting ovsdb-server (5) [ OK ] Dec 11 10:38:05 crc kubenswrapper[4953]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 11 10:38:05 crc kubenswrapper[4953]: + source /usr/local/bin/container-scripts/functions Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNBridge=br-int Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNRemote=tcp:localhost:6642 Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNEncapType=geneve Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNAvailabilityZones= Dec 11 10:38:05 crc kubenswrapper[4953]: ++ EnableChassisAsGateway=true Dec 11 10:38:05 crc kubenswrapper[4953]: ++ PhysicalNetworks= Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNHostName= Dec 11 10:38:05 crc kubenswrapper[4953]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 11 10:38:05 crc kubenswrapper[4953]: ++ ovs_dir=/var/lib/openvswitch Dec 11 10:38:05 crc kubenswrapper[4953]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 11 10:38:05 crc kubenswrapper[4953]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 11 10:38:05 crc kubenswrapper[4953]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 11 10:38:05 crc kubenswrapper[4953]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 10:38:05 crc kubenswrapper[4953]: + sleep 0.5 Dec 11 10:38:05 crc kubenswrapper[4953]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 10:38:05 crc kubenswrapper[4953]: + sleep 0.5 Dec 11 10:38:05 crc kubenswrapper[4953]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 10:38:05 crc kubenswrapper[4953]: + cleanup_ovsdb_server_semaphore Dec 11 10:38:05 crc kubenswrapper[4953]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 11 10:38:05 crc kubenswrapper[4953]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 11 10:38:05 crc kubenswrapper[4953]: > Dec 11 10:38:05 crc kubenswrapper[4953]: E1211 10:38:05.847154 4953 kuberuntime_container.go:691] "PreStop hook failed" err=< Dec 11 10:38:05 crc kubenswrapper[4953]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 11 10:38:05 crc kubenswrapper[4953]: + source /usr/local/bin/container-scripts/functions Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNBridge=br-int Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNRemote=tcp:localhost:6642 Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNEncapType=geneve Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNAvailabilityZones= Dec 11 10:38:05 crc kubenswrapper[4953]: ++ EnableChassisAsGateway=true Dec 11 10:38:05 crc kubenswrapper[4953]: ++ PhysicalNetworks= Dec 11 10:38:05 crc kubenswrapper[4953]: ++ OVNHostName= Dec 11 10:38:05 crc kubenswrapper[4953]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 11 10:38:05 crc kubenswrapper[4953]: ++ ovs_dir=/var/lib/openvswitch Dec 11 10:38:05 crc kubenswrapper[4953]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 11 10:38:05 crc kubenswrapper[4953]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 11 10:38:05 crc kubenswrapper[4953]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 11 10:38:05 crc kubenswrapper[4953]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 10:38:05 crc kubenswrapper[4953]: + sleep 0.5 Dec 11 10:38:05 crc kubenswrapper[4953]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 10:38:05 crc kubenswrapper[4953]: + sleep 0.5 Dec 11 10:38:05 crc kubenswrapper[4953]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 10:38:05 crc kubenswrapper[4953]: + cleanup_ovsdb_server_semaphore Dec 11 10:38:05 crc kubenswrapper[4953]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 11 10:38:05 crc kubenswrapper[4953]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 11 10:38:05 crc kubenswrapper[4953]: > pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server" containerID="cri-o://9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.847186 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server" containerID="cri-o://9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" gracePeriod=28 Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.855617 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron242b-account-delete-v47hk" event={"ID":"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c","Type":"ContainerStarted","Data":"41dbb44fe99702597b2dfb7089798ccafb97bd82bf8ec43a43f67a8d5f4c222b"} Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.863596 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.872159 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tqd68_48902dd9-8c9f-4983-b8dd-6f22f4382a19/openstack-network-exporter/0.log" Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.990610 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-999fx\" (UniqueName: \"kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx\") pod \"novacell1e356-account-delete-h5n9c\" (UID: \"9c62085a-9722-4020-a26f-2adee83f78c8\") " pod="openstack/novacell1e356-account-delete-h5n9c" Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.990918 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts\") pod \"novacell1e356-account-delete-h5n9c\" (UID: \"9c62085a-9722-4020-a26f-2adee83f78c8\") " pod="openstack/novacell1e356-account-delete-h5n9c" Dec 11 10:38:05 crc kubenswrapper[4953]: E1211 10:38:05.991251 4953 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Dec 11 10:38:05 crc kubenswrapper[4953]: E1211 10:38:05.991325 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts podName:9c62085a-9722-4020-a26f-2adee83f78c8 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:06.991309452 +0000 UTC m=+1605.015168485 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts") pod "novacell1e356-account-delete-h5n9c" (UID: "9c62085a-9722-4020-a26f-2adee83f78c8") : configmap "openstack-cell1-scripts" not found Dec 11 10:38:05 crc kubenswrapper[4953]: I1211 10:38:05.995304 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:38:06 crc kubenswrapper[4953]: E1211 10:38:06.004821 4953 projected.go:194] Error preparing data for projected volume kube-api-access-999fx for pod openstack/novacell1e356-account-delete-h5n9c: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 10:38:06 crc kubenswrapper[4953]: E1211 10:38:06.004909 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx podName:9c62085a-9722-4020-a26f-2adee83f78c8 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:07.004877628 +0000 UTC m=+1605.028736661 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-999fx" (UniqueName: "kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx") pod "novacell1e356-account-delete-h5n9c" (UID: "9c62085a-9722-4020-a26f-2adee83f78c8") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.034229 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovs-vswitchd" containerID="cri-o://f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" gracePeriod=28 Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.092364 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell1e356-account-delete-h5n9c"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.094425 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48902dd9-8c9f-4983-b8dd-6f22f4382a19-config\") pod \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.094541 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/48902dd9-8c9f-4983-b8dd-6f22f4382a19-ovs-rundir\") pod \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.094594 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48902dd9-8c9f-4983-b8dd-6f22f4382a19-combined-ca-bundle\") pod \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.094720 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hfsq\" (UniqueName: \"kubernetes.io/projected/48902dd9-8c9f-4983-b8dd-6f22f4382a19-kube-api-access-8hfsq\") pod \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.094802 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48902dd9-8c9f-4983-b8dd-6f22f4382a19-metrics-certs-tls-certs\") pod \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.094846 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/48902dd9-8c9f-4983-b8dd-6f22f4382a19-ovn-rundir\") pod \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\" (UID: \"48902dd9-8c9f-4983-b8dd-6f22f4382a19\") " Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.095432 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48902dd9-8c9f-4983-b8dd-6f22f4382a19-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "48902dd9-8c9f-4983-b8dd-6f22f4382a19" (UID: "48902dd9-8c9f-4983-b8dd-6f22f4382a19"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.096188 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48902dd9-8c9f-4983-b8dd-6f22f4382a19-config" (OuterVolumeSpecName: "config") pod "48902dd9-8c9f-4983-b8dd-6f22f4382a19" (UID: "48902dd9-8c9f-4983-b8dd-6f22f4382a19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.096228 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48902dd9-8c9f-4983-b8dd-6f22f4382a19-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "48902dd9-8c9f-4983-b8dd-6f22f4382a19" (UID: "48902dd9-8c9f-4983-b8dd-6f22f4382a19"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:38:06 crc kubenswrapper[4953]: E1211 10:38:06.097302 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-999fx operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/novacell1e356-account-delete-h5n9c" podUID="9c62085a-9722-4020-a26f-2adee83f78c8" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.107880 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kdnkh"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.116294 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48902dd9-8c9f-4983-b8dd-6f22f4382a19-kube-api-access-8hfsq" (OuterVolumeSpecName: "kube-api-access-8hfsq") pod "48902dd9-8c9f-4983-b8dd-6f22f4382a19" (UID: "48902dd9-8c9f-4983-b8dd-6f22f4382a19"). InnerVolumeSpecName "kube-api-access-8hfsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.125226 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e356-account-create-update-k4hjq"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.143297 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kdnkh"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.176198 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48902dd9-8c9f-4983-b8dd-6f22f4382a19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48902dd9-8c9f-4983-b8dd-6f22f4382a19" (UID: "48902dd9-8c9f-4983-b8dd-6f22f4382a19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.182774 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e356-account-create-update-k4hjq"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.196952 4953 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/48902dd9-8c9f-4983-b8dd-6f22f4382a19-ovn-rundir\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.196979 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48902dd9-8c9f-4983-b8dd-6f22f4382a19-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.196988 4953 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/48902dd9-8c9f-4983-b8dd-6f22f4382a19-ovs-rundir\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.196996 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48902dd9-8c9f-4983-b8dd-6f22f4382a19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.197005 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hfsq\" (UniqueName: \"kubernetes.io/projected/48902dd9-8c9f-4983-b8dd-6f22f4382a19-kube-api-access-8hfsq\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.275195 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.275454 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="5c566b6b-16f8-422c-acda-0325e36103e6" containerName="nova-cell1-conductor-conductor" containerID="cri-o://83eedb4ddd84362084d8ccac38fed9fcbcacfbfefe97227d1e7bf4df1164fbc0" gracePeriod=30 Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.380886 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8g247"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.410842 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8g247"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.419133 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.419357 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="1b3d5c24-61f6-4926-94ec-0e3a462334df" containerName="nova-cell0-conductor-conductor" containerID="cri-o://4d40902f2adb77e2b7dde3ed43d14df9863e66572e62ab6a82f12fa7bb0bcca2" gracePeriod=30 Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.432773 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-v4tfr"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.442751 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-v4tfr"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.447781 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48902dd9-8c9f-4983-b8dd-6f22f4382a19-metrics-certs-tls-certs" (OuterVolumeSpecName: 
"metrics-certs-tls-certs") pod "48902dd9-8c9f-4983-b8dd-6f22f4382a19" (UID: "48902dd9-8c9f-4983-b8dd-6f22f4382a19"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.454293 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.454627 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7af3727e-8096-420d-b8d0-95988a5d36db" containerName="nova-scheduler-scheduler" containerID="cri-o://76b1adf1ecb9cc73cce6fab14903ebf309e0061c7db3b0247296d4d28611c686" gracePeriod=30 Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.512936 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48902dd9-8c9f-4983-b8dd-6f22f4382a19-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.515723 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00b5c561-61d7-4ae7-9485-a5882b9a5dc1" path="/var/lib/kubelet/pods/00b5c561-61d7-4ae7-9485-a5882b9a5dc1/volumes" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.532437 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21b311c3-3edf-4905-9929-79787eb29bb8" path="/var/lib/kubelet/pods/21b311c3-3edf-4905-9929-79787eb29bb8/volumes" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.534269 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f6128e-32cb-454f-ba24-3c8e4e1cb2ba" path="/var/lib/kubelet/pods/33f6128e-32cb-454f-ba24-3c8e4e1cb2ba/volumes" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.535873 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c04ba7e-0ab4-4242-af7e-5566fc6030cb" path="/var/lib/kubelet/pods/7c04ba7e-0ab4-4242-af7e-5566fc6030cb/volumes" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.538199 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8092c791-0c1a-454e-9fe8-b3dcb63c3415" path="/var/lib/kubelet/pods/8092c791-0c1a-454e-9fe8-b3dcb63c3415/volumes" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.540387 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d02eaa4f-c90f-4399-a3f7-661e4773b7ee" path="/var/lib/kubelet/pods/d02eaa4f-c90f-4399-a3f7-661e4773b7ee/volumes" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.543612 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad155f8-cdef-44b0-9be5-5a7db0881abc" path="/var/lib/kubelet/pods/fad155f8-cdef-44b0-9be5-5a7db0881abc/volumes" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.546628 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.546698 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron242b-account-delete-v47hk"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.565119 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinderaf3a-account-delete-rtt56"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.578196 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance5aff-account-delete-5hksm"] Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.596219 4953 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/openstack-cell1-galera-0" podUID="27258186-4cab-45b4-a20c-a4c3ddc82f76" containerName="galera" containerID="cri-o://0cfe0bd98f32db174fde1333af2c3108717607f2c93978857021eab34e2c9d4e" gracePeriod=30 Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.638294 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b29c8985-0d8c-4382-9969-29422929136f" containerName="rabbitmq" containerID="cri-o://8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480" gracePeriod=604800 Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.683225 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="79a93889-ae40-4bd1-a697-5797e065231b" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.197:6080/vnc_lite.html\": dial tcp 10.217.0.197:6080: connect: connection refused" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.702803 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.716843 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl2rw\" (UniqueName: \"kubernetes.io/projected/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-kube-api-access-bl2rw\") pod \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.716899 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-openstack-config-secret\") pod \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.716989 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-openstack-config\") pod \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.717016 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-combined-ca-bundle\") pod \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\" (UID: \"56f7d9a7-e24f-4b47-b829-7adcad2b0a60\") " Dec 11 10:38:06 crc kubenswrapper[4953]: E1211 10:38:06.720246 4953 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 11 10:38:06 crc kubenswrapper[4953]: E1211 10:38:06.720316 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data podName:01196778-96de-4f79-b9ac-e01243f86ebb nodeName:}" failed. No retries permitted until 2025-12-11 10:38:10.720297424 +0000 UTC m=+1608.744156457 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data") pod "rabbitmq-cell1-server-0" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb") : configmap "rabbitmq-cell1-config-data" not found Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.772685 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n6pxp" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.772848 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-kube-api-access-bl2rw" (OuterVolumeSpecName: "kube-api-access-bl2rw") pod "56f7d9a7-e24f-4b47-b829-7adcad2b0a60" (UID: "56f7d9a7-e24f-4b47-b829-7adcad2b0a60"). InnerVolumeSpecName "kube-api-access-bl2rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.788859 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56f7d9a7-e24f-4b47-b829-7adcad2b0a60" (UID: "56f7d9a7-e24f-4b47-b829-7adcad2b0a60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.823879 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.823916 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl2rw\" (UniqueName: \"kubernetes.io/projected/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-kube-api-access-bl2rw\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:06 crc kubenswrapper[4953]: E1211 10:38:06.832927 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8521d832_efe5_4653_8c0e_8921f916e10f.slice/crio-ae1625ae9b7343e79bf1b390eabfcbbde5a933a354f8b01e304d5b2edc571afd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79a93889_ae40_4bd1_a697_5797e065231b.slice/crio-bc5af3bb14085fb06d4fbb19425f7956a6394356771a345adb23fe49da34c4ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79a93889_ae40_4bd1_a697_5797e065231b.slice/crio-conmon-bc5af3bb14085fb06d4fbb19425f7956a6394356771a345adb23fe49da34c4ef.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.858681 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "56f7d9a7-e24f-4b47-b829-7adcad2b0a60" (UID: "56f7d9a7-e24f-4b47-b829-7adcad2b0a60"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.881937 4953 scope.go:117] "RemoveContainer" containerID="34c2fd04e3b65f3a279e40c0af6591784d2ebe01fd4833cd51539e8756e2eea7" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.882206 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.884460 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b4237606-fdcf-403b-8e5a-1bbb4a2e38de/ovsdbserver-nb/0.log" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.885020 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.888174 4953 generic.go:334] "Generic (PLEG): container finished" podID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerID="f81a9c3634afcd79a633362dcf52201c0a4c001fbfe1929486695ab342d99feb" exitCode=143 Dec 11 10:38:06 crc kubenswrapper[4953]: I1211 10:38:06.888450 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd4593de-19d2-47c1-b6b0-b9c0e46e1107","Type":"ContainerDied","Data":"f81a9c3634afcd79a633362dcf52201c0a4c001fbfe1929486695ab342d99feb"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:06.933876 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderaf3a-account-delete-rtt56" event={"ID":"46cd550e-17c8-4cd2-a5e0-9746edf42836","Type":"ContainerStarted","Data":"e6eb47f34c128cb4f1af94d49e8725fbbd0356cbb00225b9a5ebe82932fa74f7"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.011607 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-run\") pod \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.011745 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-run" (OuterVolumeSpecName: "var-run") pod "498f7a43-7db9-42e8-b722-a5fb6ae4749f" (UID: "498f7a43-7db9-42e8-b722-a5fb6ae4749f"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.011901 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.011939 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-run-ovn\") pod \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.012018 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-log-ovn\") pod \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.012071 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/498f7a43-7db9-42e8-b722-a5fb6ae4749f-ovn-controller-tls-certs\") pod \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.012134 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-ovsdb-rundir\") pod \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.012158 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498f7a43-7db9-42e8-b722-a5fb6ae4749f-combined-ca-bundle\") pod \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.012195 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-combined-ca-bundle\") pod \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.012221 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-ovsdbserver-nb-tls-certs\") pod \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.012246 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-config\") pod \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.012273 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/498f7a43-7db9-42e8-b722-a5fb6ae4749f-scripts\") pod \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\" (UID: 
\"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.012298 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48wpb\" (UniqueName: \"kubernetes.io/projected/498f7a43-7db9-42e8-b722-a5fb6ae4749f-kube-api-access-48wpb\") pod \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\" (UID: \"498f7a43-7db9-42e8-b722-a5fb6ae4749f\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.012327 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv9xc\" (UniqueName: \"kubernetes.io/projected/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-kube-api-access-xv9xc\") pod \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.012895 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-999fx\" (UniqueName: \"kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx\") pod \"novacell1e356-account-delete-h5n9c\" (UID: \"9c62085a-9722-4020-a26f-2adee83f78c8\") " pod="openstack/novacell1e356-account-delete-h5n9c" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.013119 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts\") pod \"novacell1e356-account-delete-h5n9c\" (UID: \"9c62085a-9722-4020-a26f-2adee83f78c8\") " pod="openstack/novacell1e356-account-delete-h5n9c" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.013294 4953 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.013311 4953 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.013374 4953 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.013425 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts podName:9c62085a-9722-4020-a26f-2adee83f78c8 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:09.013408476 +0000 UTC m=+1607.037267509 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts") pod "novacell1e356-account-delete-h5n9c" (UID: "9c62085a-9722-4020-a26f-2adee83f78c8") : configmap "openstack-cell1-scripts" not found Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.020409 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-config" (OuterVolumeSpecName: "config") pod "b4237606-fdcf-403b-8e5a-1bbb4a2e38de" (UID: "b4237606-fdcf-403b-8e5a-1bbb4a2e38de"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.032077 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498f7a43-7db9-42e8-b722-a5fb6ae4749f-scripts" (OuterVolumeSpecName: "scripts") pod "498f7a43-7db9-42e8-b722-a5fb6ae4749f" (UID: "498f7a43-7db9-42e8-b722-a5fb6ae4749f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.035758 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "498f7a43-7db9-42e8-b722-a5fb6ae4749f" (UID: "498f7a43-7db9-42e8-b722-a5fb6ae4749f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.035807 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "498f7a43-7db9-42e8-b722-a5fb6ae4749f" (UID: "498f7a43-7db9-42e8-b722-a5fb6ae4749f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.039163 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "b4237606-fdcf-403b-8e5a-1bbb4a2e38de" (UID: "b4237606-fdcf-403b-8e5a-1bbb4a2e38de"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.043663 4953 projected.go:194] Error preparing data for projected volume kube-api-access-999fx for pod openstack/novacell1e356-account-delete-h5n9c: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.043745 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx podName:9c62085a-9722-4020-a26f-2adee83f78c8 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:09.043723159 +0000 UTC m=+1607.067582192 (durationBeforeRetry 2s). 
Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.043745 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx podName:9c62085a-9722-4020-a26f-2adee83f78c8 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:09.043723159 +0000 UTC m=+1607.067582192 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-999fx" (UniqueName: "kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx") pod "novacell1e356-account-delete-h5n9c" (UID: "9c62085a-9722-4020-a26f-2adee83f78c8") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.050056 4953 generic.go:334] "Generic (PLEG): container finished" podID="261b522a-b786-4b2b-975c-43f1cc0d8ccf" containerID="8ec34f149eb7b0df59ed60ac6fbbd810019ea5b30d0ab842e625394e2d8c2226" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.050143 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-677c7c8c9c-gh7rd" event={"ID":"261b522a-b786-4b2b-975c-43f1cc0d8ccf","Type":"ContainerDied","Data":"8ec34f149eb7b0df59ed60ac6fbbd810019ea5b30d0ab842e625394e2d8c2226"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.060493 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-kube-api-access-xv9xc" (OuterVolumeSpecName: "kube-api-access-xv9xc") pod "b4237606-fdcf-403b-8e5a-1bbb4a2e38de" (UID: "b4237606-fdcf-403b-8e5a-1bbb4a2e38de"). InnerVolumeSpecName "kube-api-access-xv9xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.064665 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7f497b107b8e8652a7f168df902d76edf4cc8c0d003e369a126e81b80c2c81c is running failed: container process not found" containerID="b7f497b107b8e8652a7f168df902d76edf4cc8c0d003e369a126e81b80c2c81c" cmd=["/usr/bin/pidof","ovsdb-server"] Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.068134 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498f7a43-7db9-42e8-b722-a5fb6ae4749f-kube-api-access-48wpb" (OuterVolumeSpecName: "kube-api-access-48wpb") pod "498f7a43-7db9-42e8-b722-a5fb6ae4749f" (UID: "498f7a43-7db9-42e8-b722-a5fb6ae4749f"). InnerVolumeSpecName "kube-api-access-48wpb".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.068974 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7f497b107b8e8652a7f168df902d76edf4cc8c0d003e369a126e81b80c2c81c is running failed: container process not found" containerID="b7f497b107b8e8652a7f168df902d76edf4cc8c0d003e369a126e81b80c2c81c" cmd=["/usr/bin/pidof","ovsdb-server"] Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.071620 4953 generic.go:334] "Generic (PLEG): container finished" podID="7b77681a-0823-42e6-b0a4-2af1ce955970" containerID="3dd428abe094a4785fe247c46053c25a62247016c38cd9af55762bbf581ab80f" exitCode=143 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.071946 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b77681a-0823-42e6-b0a4-2af1ce955970","Type":"ContainerDied","Data":"3dd428abe094a4785fe247c46053c25a62247016c38cd9af55762bbf581ab80f"} Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.072026 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7f497b107b8e8652a7f168df902d76edf4cc8c0d003e369a126e81b80c2c81c is running failed: container process not found" containerID="b7f497b107b8e8652a7f168df902d76edf4cc8c0d003e369a126e81b80c2c81c" cmd=["/usr/bin/pidof","ovsdb-server"] Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.072060 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7f497b107b8e8652a7f168df902d76edf4cc8c0d003e369a126e81b80c2c81c is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" containerName="ovsdbserver-sb" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.077772 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.097993 4953 generic.go:334] "Generic (PLEG): container finished" podID="e067a835-8a1a-4672-aaea-b8c101109018" containerID="2e9a60ec1684ff881133bf906166805dce055256199aa98702401b39a20c68d8" exitCode=143 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.098132 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e067a835-8a1a-4672-aaea-b8c101109018","Type":"ContainerDied","Data":"2e9a60ec1684ff881133bf906166805dce055256199aa98702401b39a20c68d8"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.111000 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "b4237606-fdcf-403b-8e5a-1bbb4a2e38de" (UID: "b4237606-fdcf-403b-8e5a-1bbb4a2e38de"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.113884 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-dns-svc\") pod \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114005 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-ovsdbserver-sb\") pod \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114101 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-dns-swift-storage-0\") pod \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114134 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-config\") pod \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114168 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4knhw\" (UniqueName: \"kubernetes.io/projected/e27f6309-0ccd-4aca-ad87-0cd7a9357469-kube-api-access-4knhw\") pod \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114193 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-ovsdbserver-nb\") pod \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\" (UID: \"e27f6309-0ccd-4aca-ad87-0cd7a9357469\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114217 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-scripts\") pod \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114256 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-metrics-certs-tls-certs\") pod \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\" (UID: \"b4237606-fdcf-403b-8e5a-1bbb4a2e38de\") " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114703 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114730 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114743 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/498f7a43-7db9-42e8-b722-a5fb6ae4749f-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114755 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48wpb\" (UniqueName: \"kubernetes.io/projected/498f7a43-7db9-42e8-b722-a5fb6ae4749f-kube-api-access-48wpb\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114767 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv9xc\" (UniqueName: \"kubernetes.io/projected/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-kube-api-access-xv9xc\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114794 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.114808 4953 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.115349 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-scripts" (OuterVolumeSpecName: "scripts") pod "b4237606-fdcf-403b-8e5a-1bbb4a2e38de" (UID: "b4237606-fdcf-403b-8e5a-1bbb4a2e38de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.115379 4953 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/498f7a43-7db9-42e8-b722-a5fb6ae4749f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.115869 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6pxp" event={"ID":"498f7a43-7db9-42e8-b722-a5fb6ae4749f","Type":"ContainerDied","Data":"abd5eb9c865c91ed39409e56456d5631999a02a171950a07d1bf10beb42a032a"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.115915 4953 scope.go:117] "RemoveContainer" containerID="d0b7c04c7aac708c8d19088fd2a98707adc64c19e1992cf63c2b85b7be925ba4" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.116034 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n6pxp" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.135705 4953 generic.go:334] "Generic (PLEG): container finished" podID="e27f6309-0ccd-4aca-ad87-0cd7a9357469" containerID="43c5d3cbea07b2f71a6427f7f8f0c5486326e6e637aefebb1edd4c2b3c333c07" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.135838 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" event={"ID":"e27f6309-0ccd-4aca-ad87-0cd7a9357469","Type":"ContainerDied","Data":"43c5d3cbea07b2f71a6427f7f8f0c5486326e6e637aefebb1edd4c2b3c333c07"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.135938 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.138118 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27f6309-0ccd-4aca-ad87-0cd7a9357469-kube-api-access-4knhw" (OuterVolumeSpecName: "kube-api-access-4knhw") pod "e27f6309-0ccd-4aca-ad87-0cd7a9357469" (UID: "e27f6309-0ccd-4aca-ad87-0cd7a9357469"). InnerVolumeSpecName "kube-api-access-4knhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.154004 4953 generic.go:334] "Generic (PLEG): container finished" podID="767370a9-f8dd-4370-a2cc-f5baeff52c54" containerID="ae3f22ec9f89b003c85fac5cd8cf0695244934ca68cf1fc2a0f17935650f23bf" exitCode=143 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.154153 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c85df7b9d-rdbfq" event={"ID":"767370a9-f8dd-4370-a2cc-f5baeff52c54","Type":"ContainerDied","Data":"ae3f22ec9f89b003c85fac5cd8cf0695244934ca68cf1fc2a0f17935650f23bf"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.161107 4953 generic.go:334] "Generic (PLEG): container finished" podID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.161191 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbtwm" event={"ID":"5cfd14e5-05e2-4cc5-ba83-259321c6f872","Type":"ContainerDied","Data":"9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.163938 4953 generic.go:334] "Generic (PLEG): container finished" podID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" containerID="111e78ea1225285d6f9cf9e61ccddd3adee93f71a7ea5c5159526554c821ed7c" exitCode=143 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.164007 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7567d9469d-rx5dx" event={"ID":"345a513a-93a0-4e23-9266-3eeaf3ff0c10","Type":"ContainerDied","Data":"111e78ea1225285d6f9cf9e61ccddd3adee93f71a7ea5c5159526554c821ed7c"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.166779 4953 generic.go:334] "Generic (PLEG): container finished" podID="d1833793-1408-450f-8a7e-e01e6048edd5" containerID="a1dd894fb738f43b760b8725bd438e6786b826d8bd5ea6ec40ebf1c67bee2cc0" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.166852 4953 generic.go:334] "Generic (PLEG): container finished" podID="d1833793-1408-450f-8a7e-e01e6048edd5" containerID="6633d2d60118f289461651ca377abc04f8eae490967bd314f612d43a8c179596" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.166921 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1833793-1408-450f-8a7e-e01e6048edd5","Type":"ContainerDied","Data":"a1dd894fb738f43b760b8725bd438e6786b826d8bd5ea6ec40ebf1c67bee2cc0"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.166949 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1833793-1408-450f-8a7e-e01e6048edd5","Type":"ContainerDied","Data":"6633d2d60118f289461651ca377abc04f8eae490967bd314f612d43a8c179596"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.167069 4953 scope.go:117] "RemoveContainer" containerID="43c5d3cbea07b2f71a6427f7f8f0c5486326e6e637aefebb1edd4c2b3c333c07" Dec 11 10:38:07 crc 
Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.191373 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "56f7d9a7-e24f-4b47-b829-7adcad2b0a60" (UID: "56f7d9a7-e24f-4b47-b829-7adcad2b0a60"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.192991 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tqd68_48902dd9-8c9f-4983-b8dd-6f22f4382a19/openstack-network-exporter/0.log" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.193137 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-tqd68" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.194237 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tqd68" event={"ID":"48902dd9-8c9f-4983-b8dd-6f22f4382a19","Type":"ContainerDied","Data":"83fcd8493ce3c7045884617ae0baff2087a15f99d164d4a580ff6ea0be0b5085"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.203140 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e/ovsdbserver-sb/0.log" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.203185 4953 generic.go:334] "Generic (PLEG): container finished" podID="8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" containerID="b7f497b107b8e8652a7f168df902d76edf4cc8c0d003e369a126e81b80c2c81c" exitCode=143 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.203251 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e","Type":"ContainerDied","Data":"b7f497b107b8e8652a7f168df902d76edf4cc8c0d003e369a126e81b80c2c81c"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.207564 4953 generic.go:334] "Generic (PLEG): container finished" podID="79a93889-ae40-4bd1-a697-5797e065231b" containerID="bc5af3bb14085fb06d4fbb19425f7956a6394356771a345adb23fe49da34c4ef" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.207706 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"79a93889-ae40-4bd1-a697-5797e065231b","Type":"ContainerDied","Data":"bc5af3bb14085fb06d4fbb19425f7956a6394356771a345adb23fe49da34c4ef"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.220366 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4knhw\" (UniqueName: \"kubernetes.io/projected/e27f6309-0ccd-4aca-ad87-0cd7a9357469-kube-api-access-4knhw\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.220401 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.220417 4953 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/56f7d9a7-e24f-4b47-b829-7adcad2b0a60-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.221200 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500"
containerID="3c65359d49ee68c46b25f7c48cca23725c2a07a228cfed6a3b8c90cef4f401ce" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.221320 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="55455d29b2f9f09dccbeb1ee95244b733e578206c81bca651b8b08a2abc3da6f" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.221422 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="c7fa20846bc15438ea48e549cb0457b5fdbbcd2598a4d940ee938fb4fb3a9db3" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.221513 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="bc0e3f085ef80ef3d58ffae3ef2a52f5bf40447e1f3f4fae4ba935bd88ae1802" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.221613 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="8981b379cfe002ec1ffbcd789bf3f9088d55241543514d305383406d070e9749" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.221696 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="42ee56a6413b971f972dd83deea70f7f4ed0f5bd15d3d8739f47c3de625b36da" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.221773 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="ee01036005d992c399d8891c4088b620c28089677482095eb23ddbcf5787ed0f" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.221863 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="8271a6a07ac8401063b754218c3eb89ceb4f2d9d019082057eb897dcd5350656" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.221965 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="47e0171f5c393def51346598fe0050490ca2584402ed6532e4a68c71c29d1284" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.222045 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="bf1b66be16060aee36932d81a73465cd1174ad5e0ce2ac136fa9b17ea2beb026" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.222119 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="84916ff0808e4afae4bbc6dc9c0bfcc649e85608c78bcca53fc062955964d97f" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.222195 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="8bd2acaf8a28b1f1656e66014334ca8748f846ad6e8ad38b27cb4bdf466f3173" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.222279 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="679d2553c36012b1b180157877c057ec44f2c2462adfbbecdb5379d3b623b02c" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.222379 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="8f4c46cb4b9e3e20f278144150f92781df0603ba1ce189953a04f830ee3bc004" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.222506 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"3c65359d49ee68c46b25f7c48cca23725c2a07a228cfed6a3b8c90cef4f401ce"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.222653 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"55455d29b2f9f09dccbeb1ee95244b733e578206c81bca651b8b08a2abc3da6f"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.222749 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"c7fa20846bc15438ea48e549cb0457b5fdbbcd2598a4d940ee938fb4fb3a9db3"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.222915 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"bc0e3f085ef80ef3d58ffae3ef2a52f5bf40447e1f3f4fae4ba935bd88ae1802"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.223011 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"8981b379cfe002ec1ffbcd789bf3f9088d55241543514d305383406d070e9749"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.223167 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"42ee56a6413b971f972dd83deea70f7f4ed0f5bd15d3d8739f47c3de625b36da"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.223264 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"ee01036005d992c399d8891c4088b620c28089677482095eb23ddbcf5787ed0f"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.223348 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"8271a6a07ac8401063b754218c3eb89ceb4f2d9d019082057eb897dcd5350656"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.223439 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"47e0171f5c393def51346598fe0050490ca2584402ed6532e4a68c71c29d1284"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.223530 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"bf1b66be16060aee36932d81a73465cd1174ad5e0ce2ac136fa9b17ea2beb026"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.223699 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"84916ff0808e4afae4bbc6dc9c0bfcc649e85608c78bcca53fc062955964d97f"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.223797 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"8bd2acaf8a28b1f1656e66014334ca8748f846ad6e8ad38b27cb4bdf466f3173"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.223915 4953 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"679d2553c36012b1b180157877c057ec44f2c2462adfbbecdb5379d3b623b02c"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.224022 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"8f4c46cb4b9e3e20f278144150f92781df0603ba1ce189953a04f830ee3bc004"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.227168 4953 generic.go:334] "Generic (PLEG): container finished" podID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" containerID="33acb5b8399e690c332876bb46d0a8aa9f480f6d6435312361f99da160bb499a" exitCode=143 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.227232 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6","Type":"ContainerDied","Data":"33acb5b8399e690c332876bb46d0a8aa9f480f6d6435312361f99da160bb499a"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.231845 4953 generic.go:334] "Generic (PLEG): container finished" podID="8521d832-efe5-4653-8c0e-8921f916e10f" containerID="d900251d830ca62ae055c9f9a2f8078dbd3d1545f50142030a3209a17a071070" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.231872 4953 generic.go:334] "Generic (PLEG): container finished" podID="8521d832-efe5-4653-8c0e-8921f916e10f" containerID="ae1625ae9b7343e79bf1b390eabfcbbde5a933a354f8b01e304d5b2edc571afd" exitCode=0 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.231913 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" event={"ID":"8521d832-efe5-4653-8c0e-8921f916e10f","Type":"ContainerDied","Data":"d900251d830ca62ae055c9f9a2f8078dbd3d1545f50142030a3209a17a071070"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.231935 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" event={"ID":"8521d832-efe5-4653-8c0e-8921f916e10f","Type":"ContainerDied","Data":"ae1625ae9b7343e79bf1b390eabfcbbde5a933a354f8b01e304d5b2edc571afd"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.327493 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance5aff-account-delete-5hksm" event={"ID":"e6515789-e6f6-4aa3-83f3-4fc58f862dc9","Type":"ContainerStarted","Data":"fb60de9865675abd3277bc52365656a3c4bd56bd096306b1e205c9933f3fd7e6"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.340168 4953 generic.go:334] "Generic (PLEG): container finished" podID="caec0159-12b1-46f9-952c-10f229948036" containerID="7d7961ffaf0fa5639d3e96bbbb7ff1815fd8017ed09d51fdb3f868fc15297c07" exitCode=143 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.345068 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" event={"ID":"caec0159-12b1-46f9-952c-10f229948036","Type":"ContainerDied","Data":"7d7961ffaf0fa5639d3e96bbbb7ff1815fd8017ed09d51fdb3f868fc15297c07"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.400051 4953 generic.go:334] "Generic (PLEG): container finished" podID="544e1955-4316-4587-90a8-94bac4f81ae5" containerID="9bcdd67ff3f27b165dca3277b206f20442bbecd9d522b5435dc8a058e29f8375" exitCode=143 Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.401429 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell1e356-account-delete-h5n9c" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.401073 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" event={"ID":"544e1955-4316-4587-90a8-94bac4f81ae5","Type":"ContainerDied","Data":"9bcdd67ff3f27b165dca3277b206f20442bbecd9d522b5435dc8a058e29f8375"} Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.432631 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementa6a0-account-delete-vhpnd"] Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.442786 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell0caaa-account-delete-n4fck"] Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.458720 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498f7a43-7db9-42e8-b722-a5fb6ae4749f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "498f7a43-7db9-42e8-b722-a5fb6ae4749f" (UID: "498f7a43-7db9-42e8-b722-a5fb6ae4749f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.499902 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican3c8c-account-delete-kzsq8"] Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.506742 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 11 10:38:07 crc kubenswrapper[4953]: W1211 10:38:07.511070 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aee1a2c_6a1e_48c0_9491_3f61371047eb.slice/crio-50cdd6cfdaf7476788da5828eaafbaf677a388187dfb6ff3243df19709359e3e WatchSource:0}: Error finding container 50cdd6cfdaf7476788da5828eaafbaf677a388187dfb6ff3243df19709359e3e: Status 404 returned error can't find the container with id 50cdd6cfdaf7476788da5828eaafbaf677a388187dfb6ff3243df19709359e3e Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.539666 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498f7a43-7db9-42e8-b722-a5fb6ae4749f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.539694 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.582085 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e27f6309-0ccd-4aca-ad87-0cd7a9357469" (UID: "e27f6309-0ccd-4aca-ad87-0cd7a9357469"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.629760 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4237606-fdcf-403b-8e5a-1bbb4a2e38de" (UID: "b4237606-fdcf-403b-8e5a-1bbb4a2e38de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.641273 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.641555 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.645986 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83eedb4ddd84362084d8ccac38fed9fcbcacfbfefe97227d1e7bf4df1164fbc0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.648977 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e27f6309-0ccd-4aca-ad87-0cd7a9357469" (UID: "e27f6309-0ccd-4aca-ad87-0cd7a9357469"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.655833 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-config" (OuterVolumeSpecName: "config") pod "e27f6309-0ccd-4aca-ad87-0cd7a9357469" (UID: "e27f6309-0ccd-4aca-ad87-0cd7a9357469"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.657819 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83eedb4ddd84362084d8ccac38fed9fcbcacfbfefe97227d1e7bf4df1164fbc0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.659999 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83eedb4ddd84362084d8ccac38fed9fcbcacfbfefe97227d1e7bf4df1164fbc0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.660115 4953 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="5c566b6b-16f8-422c-acda-0325e36103e6" containerName="nova-cell1-conductor-conductor" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.670029 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e27f6309-0ccd-4aca-ad87-0cd7a9357469" (UID: "e27f6309-0ccd-4aca-ad87-0cd7a9357469"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.674740 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e27f6309-0ccd-4aca-ad87-0cd7a9357469" (UID: "e27f6309-0ccd-4aca-ad87-0cd7a9357469"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.754980 4953 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.755016 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.755025 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.755035 4953 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e27f6309-0ccd-4aca-ad87-0cd7a9357469-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.755097 4953 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 11 10:38:07 crc kubenswrapper[4953]: E1211 10:38:07.755146 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data podName:b29c8985-0d8c-4382-9969-29422929136f nodeName:}" failed. No retries permitted until 2025-12-11 10:38:11.755129569 +0000 UTC m=+1609.778988602 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data") pod "rabbitmq-server-0" (UID: "b29c8985-0d8c-4382-9969-29422929136f") : configmap "rabbitmq-config-data" not found Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.790788 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498f7a43-7db9-42e8-b722-a5fb6ae4749f-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "498f7a43-7db9-42e8-b722-a5fb6ae4749f" (UID: "498f7a43-7db9-42e8-b722-a5fb6ae4749f"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.790802 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "b4237606-fdcf-403b-8e5a-1bbb4a2e38de" (UID: "b4237606-fdcf-403b-8e5a-1bbb4a2e38de"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.858214 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "b4237606-fdcf-403b-8e5a-1bbb4a2e38de" (UID: "b4237606-fdcf-403b-8e5a-1bbb4a2e38de"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.859969 4953 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/498f7a43-7db9-42e8-b722-a5fb6ae4749f-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.860018 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.860032 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4237606-fdcf-403b-8e5a-1bbb4a2e38de-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:07 crc kubenswrapper[4953]: I1211 10:38:07.867580 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-677c7c8c9c-gh7rd" podUID="261b522a-b786-4b2b-975c-43f1cc0d8ccf" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": dial tcp 10.217.0.154:9696: connect: connection refused" Dec 11 10:38:08 crc kubenswrapper[4953]: E1211 10:38:08.012325 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 10:38:08 crc kubenswrapper[4953]: E1211 10:38:08.012659 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 10:38:08 crc kubenswrapper[4953]: E1211 10:38:08.012832 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 10:38:08 crc kubenswrapper[4953]: E1211 10:38:08.012858 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbtwm" 
podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server" Dec 11 10:38:08 crc kubenswrapper[4953]: E1211 10:38:08.015916 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 10:38:08 crc kubenswrapper[4953]: E1211 10:38:08.068540 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 10:38:08 crc kubenswrapper[4953]: E1211 10:38:08.076051 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 10:38:08 crc kubenswrapper[4953]: E1211 10:38:08.076125 4953 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovs-vswitchd" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.223845 4953 scope.go:117] "RemoveContainer" containerID="171127e736e1696680b8224078e3d976deda865332d1d3c62b28d433837b0886" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.305819 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell1e356-account-delete-h5n9c" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.327032 4953 scope.go:117] "RemoveContainer" containerID="9700ecceafec88bb52bb9474793b818e8b7aef5592713f65ce2d49b857374493" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.422379 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e/ovsdbserver-sb/0.log" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.422548 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e","Type":"ContainerDied","Data":"b1d5dfa95f27ea161b20ee40d6f6d4bf87396305b0879c824cc4998b6621f128"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.422636 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d5dfa95f27ea161b20ee40d6f6d4bf87396305b0879c824cc4998b6621f128" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.425473 4953 generic.go:334] "Generic (PLEG): container finished" podID="caec0159-12b1-46f9-952c-10f229948036" containerID="6cb07fdb5e67db9e16c8125784b8b3014f71452b7d478333ae5ae1ede91ec6ff" exitCode=0 Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.425535 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" event={"ID":"caec0159-12b1-46f9-952c-10f229948036","Type":"ContainerDied","Data":"6cb07fdb5e67db9e16c8125784b8b3014f71452b7d478333ae5ae1ede91ec6ff"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.425585 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" event={"ID":"caec0159-12b1-46f9-952c-10f229948036","Type":"ContainerDied","Data":"44b47d8eb8cc371d2022c5dadcc674fbdcac5dae0c680bd3bd0f7d9e81ddc2d4"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.425595 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b47d8eb8cc371d2022c5dadcc674fbdcac5dae0c680bd3bd0f7d9e81ddc2d4" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.427430 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-sstm7" event={"ID":"e27f6309-0ccd-4aca-ad87-0cd7a9357469","Type":"ContainerDied","Data":"60ed0977fe7062141d08e06124136a5c59fe9dacbe6224d4d1e28a7cbdfffeb5"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.438387 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" event={"ID":"8521d832-efe5-4653-8c0e-8921f916e10f","Type":"ContainerDied","Data":"a25f0695175849fb98b2a8082989c5a96391ec883dd5d17ff8ccc5433e8dbec9"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.438424 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a25f0695175849fb98b2a8082989c5a96391ec883dd5d17ff8ccc5433e8dbec9" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.440439 4953 generic.go:334] "Generic (PLEG): container finished" podID="e6515789-e6f6-4aa3-83f3-4fc58f862dc9" containerID="52a8a81efc76dd9871260a4f51ba575eeb502a0c5bcdca997ff392a64c988a8a" exitCode=0 Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.440524 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance5aff-account-delete-5hksm" event={"ID":"e6515789-e6f6-4aa3-83f3-4fc58f862dc9","Type":"ContainerDied","Data":"52a8a81efc76dd9871260a4f51ba575eeb502a0c5bcdca997ff392a64c988a8a"} Dec 11 10:38:08 crc 
kubenswrapper[4953]: I1211 10:38:08.442400 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican3c8c-account-delete-kzsq8" event={"ID":"b09879bd-62c8-4810-ad58-09db28d6afb5","Type":"ContainerStarted","Data":"56ab39730968e40fdc10da81605b560f0b5f5a4167aefccc2be6a303575bc685"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.445558 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0caaa-account-delete-n4fck" event={"ID":"12df8687-e24e-47fb-802c-3ab978ed04fd","Type":"ContainerStarted","Data":"1f4a8b8fe5c62cfd729ea5bef5c02cbf9e6535458dd38a397dfc635e8f13b149"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.448040 4953 generic.go:334] "Generic (PLEG): container finished" podID="27258186-4cab-45b4-a20c-a4c3ddc82f76" containerID="0cfe0bd98f32db174fde1333af2c3108717607f2c93978857021eab34e2c9d4e" exitCode=0 Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.448104 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27258186-4cab-45b4-a20c-a4c3ddc82f76","Type":"ContainerDied","Data":"0cfe0bd98f32db174fde1333af2c3108717607f2c93978857021eab34e2c9d4e"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.448123 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27258186-4cab-45b4-a20c-a4c3ddc82f76","Type":"ContainerDied","Data":"4ef4df22cc9c9f55d1bb8a1da388a5bbc6210506bfcd3708750a31edf4486029"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.448133 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ef4df22cc9c9f55d1bb8a1da388a5bbc6210506bfcd3708750a31edf4486029" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.450476 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b4237606-fdcf-403b-8e5a-1bbb4a2e38de/ovsdbserver-nb/0.log" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.450529 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4237606-fdcf-403b-8e5a-1bbb4a2e38de","Type":"ContainerDied","Data":"6b54a10b2979db12155abb16c859a889440d76284d97353bd444550a2c49e75f"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.450659 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.462564 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-7567d9469d-rx5dx" podUID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.151:8778/\": read tcp 10.217.0.2:37372->10.217.0.151:8778: read: connection reset by peer" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.462802 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-7567d9469d-rx5dx" podUID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.151:8778/\": read tcp 10.217.0.2:37388->10.217.0.151:8778: read: connection reset by peer" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.466036 4953 generic.go:334] "Generic (PLEG): container finished" podID="992b7c13-c6c6-4641-9c9a-3d8bfbd5029c" containerID="48707275b7bce1fa32e26a8593896bb5249bdc8d8437a70dc6fc21de8d0d7886" exitCode=0 Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.466591 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron242b-account-delete-v47hk" event={"ID":"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c","Type":"ContainerDied","Data":"48707275b7bce1fa32e26a8593896bb5249bdc8d8437a70dc6fc21de8d0d7886"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.468931 4953 generic.go:334] "Generic (PLEG): container finished" podID="544e1955-4316-4587-90a8-94bac4f81ae5" containerID="120e662c3201d0f81e55488f64c74d01e67c74d5af04b0ca903d4ba77213d505" exitCode=0 Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.469002 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" event={"ID":"544e1955-4316-4587-90a8-94bac4f81ae5","Type":"ContainerDied","Data":"120e662c3201d0f81e55488f64c74d01e67c74d5af04b0ca903d4ba77213d505"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.469034 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" event={"ID":"544e1955-4316-4587-90a8-94bac4f81ae5","Type":"ContainerDied","Data":"e0837a1a89fc8f4183d7e64dada5af8c45d46c64eb05cfb2eaf2de1357dbe535"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.469049 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0837a1a89fc8f4183d7e64dada5af8c45d46c64eb05cfb2eaf2de1357dbe535" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.471436 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"79a93889-ae40-4bd1-a697-5797e065231b","Type":"ContainerDied","Data":"6c519abe2abb5f8fdb11e1a68ab9c639af684b5e3f48b346e7ea25f62c51f921"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.471617 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c519abe2abb5f8fdb11e1a68ab9c639af684b5e3f48b346e7ea25f62c51f921" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.473939 4953 generic.go:334] "Generic (PLEG): container finished" podID="46cd550e-17c8-4cd2-a5e0-9746edf42836" containerID="c01dfb17fd1a4b83ee5eb0990cc0f19e9a731d964693ce7da348597be82d9a2f" exitCode=0 Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.474072 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderaf3a-account-delete-rtt56" 
event={"ID":"46cd550e-17c8-4cd2-a5e0-9746edf42836","Type":"ContainerDied","Data":"c01dfb17fd1a4b83ee5eb0990cc0f19e9a731d964693ce7da348597be82d9a2f"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.500775 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell1e356-account-delete-h5n9c" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.505944 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f7d9a7-e24f-4b47-b829-7adcad2b0a60" path="/var/lib/kubelet/pods/56f7d9a7-e24f-4b47-b829-7adcad2b0a60/volumes" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.509953 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1833793-1408-450f-8a7e-e01e6048edd5","Type":"ContainerDied","Data":"54cfaf9c4ca8b8cfd09d72616fb4df347ee6589d72cb8e24c0770de7b158f9dc"} Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.510042 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54cfaf9c4ca8b8cfd09d72616fb4df347ee6589d72cb8e24c0770de7b158f9dc" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.510079 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi339c-account-delete-l2kws"] Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.510100 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementa6a0-account-delete-vhpnd" event={"ID":"3aee1a2c-6a1e-48c0-9491-3f61371047eb","Type":"ContainerStarted","Data":"50cdd6cfdaf7476788da5828eaafbaf677a388187dfb6ff3243df19709359e3e"} Dec 11 10:38:08 crc kubenswrapper[4953]: W1211 10:38:08.516525 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10e32559_b465_4538_af8b_9dd3deedf2b9.slice/crio-4f316ac35b4df1a9f5195a54af7c20b83aa8d2cd8c9a6a360ce94ef77c733433 WatchSource:0}: Error finding container 4f316ac35b4df1a9f5195a54af7c20b83aa8d2cd8c9a6a360ce94ef77c733433: Status 404 returned error can't find the container with id 4f316ac35b4df1a9f5195a54af7c20b83aa8d2cd8c9a6a360ce94ef77c733433 Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.658547 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.658980 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="ceilometer-central-agent" containerID="cri-o://26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef" gracePeriod=30 Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.659553 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="proxy-httpd" containerID="cri-o://7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f" gracePeriod=30 Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.659656 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="sg-core" containerID="cri-o://833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409" gracePeriod=30 Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.659711 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" 
containerName="ceilometer-notification-agent" containerID="cri-o://24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3" gracePeriod=30 Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.701945 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.702230 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9da03c89-b3fb-431e-bef0-eb8f6d0b180e" containerName="kube-state-metrics" containerID="cri-o://e77c7ae1c87e7949e1f82009c61668e44597f8128ff13d56a0b924b074388ac2" gracePeriod=30 Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.748129 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-sstm7"] Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.771071 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e/ovsdbserver-sb/0.log" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.771183 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 11 10:38:08 crc kubenswrapper[4953]: E1211 10:38:08.774873 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c0f6853d6258372aa9946f5e58c9f253d8e32cbaa5a5914801b2de468c7d1703" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.776658 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-sstm7"] Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.792967 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-tqd68"] Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.805478 4953 scope.go:117] "RemoveContainer" containerID="e70fe7fa2779f3637bf42c139e92bf6db02367cfe5162c9dbcefd534285e7752" Dec 11 10:38:08 crc kubenswrapper[4953]: E1211 10:38:08.828724 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c0f6853d6258372aa9946f5e58c9f253d8e32cbaa5a5914801b2de468c7d1703" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.842347 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-scripts\") pod \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.842503 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-combined-ca-bundle\") pod \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.842536 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8w67\" (UniqueName: \"kubernetes.io/projected/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-kube-api-access-c8w67\") pod \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\" (UID: 
\"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.842605 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.842642 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-ovsdbserver-sb-tls-certs\") pod \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.842667 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-ovsdb-rundir\") pod \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.842722 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-metrics-certs-tls-certs\") pod \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.842744 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-config\") pod \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\" (UID: \"8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e\") " Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.853237 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-scripts" (OuterVolumeSpecName: "scripts") pod "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" (UID: "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.853611 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-config" (OuterVolumeSpecName: "config") pod "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" (UID: "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.855122 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" (UID: "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.858653 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-tqd68"] Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.949448 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" (UID: "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.951754 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n6pxp"] Dec 11 10:38:08 crc kubenswrapper[4953]: E1211 10:38:08.978001 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c0f6853d6258372aa9946f5e58c9f253d8e32cbaa5a5914801b2de468c7d1703" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 11 10:38:08 crc kubenswrapper[4953]: E1211 10:38:08.978074 4953 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" containerName="ovn-northd" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.978668 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-kube-api-access-c8w67" (OuterVolumeSpecName: "kube-api-access-c8w67") pod "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" (UID: "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e"). InnerVolumeSpecName "kube-api-access-c8w67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.988214 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8w67\" (UniqueName: \"kubernetes.io/projected/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-kube-api-access-c8w67\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.988263 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.988274 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.988286 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.988295 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:08 crc kubenswrapper[4953]: I1211 10:38:08.991654 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="4b1b7520-f52c-4a2a-98e5-16ac7460bade" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.165:8776/healthcheck\": read tcp 10.217.0.2:57828->10.217.0.165:8776: read: connection reset by peer" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.003734 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n6pxp"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.011523 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.012007 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="ab07f951-5c8d-428b-9b26-52ea2284ee52" containerName="memcached" containerID="cri-o://41bbb6ee795ebc3c22e509c06b7f775810c8aed2e9da9f0f6b746d7e045c0c23" gracePeriod=30 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.028622 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.040606 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-n8dxg"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.066855 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-pqxj6"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.066922 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-n8dxg"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.066974 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" (UID: "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.073636 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-pqxj6"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.086606 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-55b68558f8-r49n8"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.086848 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-55b68558f8-r49n8" podUID="b4e64ea9-3129-46a7-8197-bdd7730ad3f1" containerName="keystone-api" containerID="cri-o://e95a830582a33c31ab5aeaf4d56f0badd309548a0a67ed5368d9a6983add712a" gracePeriod=30 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.093684 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts\") pod \"novacell1e356-account-delete-h5n9c\" (UID: \"9c62085a-9722-4020-a26f-2adee83f78c8\") " pod="openstack/novacell1e356-account-delete-h5n9c" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.093935 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-999fx\" (UniqueName: \"kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx\") pod \"novacell1e356-account-delete-h5n9c\" (UID: \"9c62085a-9722-4020-a26f-2adee83f78c8\") " pod="openstack/novacell1e356-account-delete-h5n9c" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.094160 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.094183 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.099184 4953 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.099294 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts podName:9c62085a-9722-4020-a26f-2adee83f78c8 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:13.099262345 +0000 UTC m=+1611.123121378 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts") pod "novacell1e356-account-delete-h5n9c" (UID: "9c62085a-9722-4020-a26f-2adee83f78c8") : configmap "openstack-cell1-scripts" not found Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.150182 4953 projected.go:194] Error preparing data for projected volume kube-api-access-999fx for pod openstack/novacell1e356-account-delete-h5n9c: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.150248 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx podName:9c62085a-9722-4020-a26f-2adee83f78c8 nodeName:}" failed. 
No retries permitted until 2025-12-11 10:38:13.150228438 +0000 UTC m=+1611.174087471 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-999fx" (UniqueName: "kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx") pod "novacell1e356-account-delete-h5n9c" (UID: "9c62085a-9722-4020-a26f-2adee83f78c8") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.150824 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.160798 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone7975-account-delete-bmljp"] Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.161245 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4237606-fdcf-403b-8e5a-1bbb4a2e38de" containerName="openstack-network-exporter" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161260 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4237606-fdcf-403b-8e5a-1bbb4a2e38de" containerName="openstack-network-exporter" Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.161270 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" containerName="openstack-network-exporter" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161277 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" containerName="openstack-network-exporter" Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.161289 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27f6309-0ccd-4aca-ad87-0cd7a9357469" containerName="init" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161295 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27f6309-0ccd-4aca-ad87-0cd7a9357469" containerName="init" Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.161313 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498f7a43-7db9-42e8-b722-a5fb6ae4749f" containerName="ovn-controller" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161319 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="498f7a43-7db9-42e8-b722-a5fb6ae4749f" containerName="ovn-controller" Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.161327 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48902dd9-8c9f-4983-b8dd-6f22f4382a19" containerName="openstack-network-exporter" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161333 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="48902dd9-8c9f-4983-b8dd-6f22f4382a19" containerName="openstack-network-exporter" Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.161344 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" containerName="ovsdbserver-sb" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161349 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" containerName="ovsdbserver-sb" Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.161361 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27f6309-0ccd-4aca-ad87-0cd7a9357469" containerName="dnsmasq-dns" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161368 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27f6309-0ccd-4aca-ad87-0cd7a9357469" 
containerName="dnsmasq-dns" Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.161380 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4237606-fdcf-403b-8e5a-1bbb4a2e38de" containerName="ovsdbserver-nb" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161385 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4237606-fdcf-403b-8e5a-1bbb4a2e38de" containerName="ovsdbserver-nb" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161591 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" containerName="openstack-network-exporter" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161606 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4237606-fdcf-403b-8e5a-1bbb4a2e38de" containerName="openstack-network-exporter" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161620 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" containerName="ovsdbserver-sb" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161631 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27f6309-0ccd-4aca-ad87-0cd7a9357469" containerName="dnsmasq-dns" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161641 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="498f7a43-7db9-42e8-b722-a5fb6ae4749f" containerName="ovn-controller" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161650 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4237606-fdcf-403b-8e5a-1bbb4a2e38de" containerName="ovsdbserver-nb" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.161658 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="48902dd9-8c9f-4983-b8dd-6f22f4382a19" containerName="openstack-network-exporter" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.162486 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone7975-account-delete-bmljp" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.174959 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone7975-account-delete-bmljp"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.197844 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jndc6"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.234100 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jndc6"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.307352 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bckrd\" (UniqueName: \"kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd\") pod \"keystone7975-account-delete-bmljp\" (UID: \"38d12eb6-7ca0-4003-a9f7-f691f65097e4\") " pod="openstack/keystone7975-account-delete-bmljp" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.307413 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts\") pod \"keystone7975-account-delete-bmljp\" (UID: \"38d12eb6-7ca0-4003-a9f7-f691f65097e4\") " pod="openstack/keystone7975-account-delete-bmljp" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.338117 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7975-account-create-update-8dscm"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.369395 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone7975-account-delete-bmljp"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.396009 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7975-account-create-update-8dscm"] Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.409469 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bckrd\" (UniqueName: \"kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd\") pod \"keystone7975-account-delete-bmljp\" (UID: \"38d12eb6-7ca0-4003-a9f7-f691f65097e4\") " pod="openstack/keystone7975-account-delete-bmljp" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.409550 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts\") pod \"keystone7975-account-delete-bmljp\" (UID: \"38d12eb6-7ca0-4003-a9f7-f691f65097e4\") " pod="openstack/keystone7975-account-delete-bmljp" Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.409846 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.412836 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts podName:38d12eb6-7ca0-4003-a9f7-f691f65097e4 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:09.909903999 +0000 UTC m=+1607.933763092 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts") pod "keystone7975-account-delete-bmljp" (UID: "38d12eb6-7ca0-4003-a9f7-f691f65097e4") : configmap "openstack-scripts" not found Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.414882 4953 projected.go:194] Error preparing data for projected volume kube-api-access-bckrd for pod openstack/keystone7975-account-delete-bmljp: failed to fetch token: serviceaccounts "galera-openstack" not found Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.414929 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd podName:38d12eb6-7ca0-4003-a9f7-f691f65097e4 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:09.914918306 +0000 UTC m=+1607.938777339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bckrd" (UniqueName: "kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd") pod "keystone7975-account-delete-bmljp" (UID: "38d12eb6-7ca0-4003-a9f7-f691f65097e4") : failed to fetch token: serviceaccounts "galera-openstack" not found Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.450786 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" (UID: "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.518449 4953 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.554221 4953 generic.go:334] "Generic (PLEG): container finished" podID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" containerID="dfc2a9a94740a5c1e7c18669633ef308479efaeb144cead2c91d20383752f603" exitCode=0 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.554502 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7567d9469d-rx5dx" event={"ID":"345a513a-93a0-4e23-9266-3eeaf3ff0c10","Type":"ContainerDied","Data":"dfc2a9a94740a5c1e7c18669633ef308479efaeb144cead2c91d20383752f603"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.559219 4953 generic.go:334] "Generic (PLEG): container finished" podID="4b1b7520-f52c-4a2a-98e5-16ac7460bade" containerID="7be2bbaefa3e689cb3eb71687b4eaaaa7ace9bf5c6191bc5de9d655c138598a0" exitCode=0 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.559294 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4b1b7520-f52c-4a2a-98e5-16ac7460bade","Type":"ContainerDied","Data":"7be2bbaefa3e689cb3eb71687b4eaaaa7ace9bf5c6191bc5de9d655c138598a0"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.574510 4953 generic.go:334] "Generic (PLEG): container finished" podID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerID="d01aaa77da386e9baab54f2e6b436105ab0703db857b3d9adc7c4e2df8f0e6e2" exitCode=0 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.574796 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"cd4593de-19d2-47c1-b6b0-b9c0e46e1107","Type":"ContainerDied","Data":"d01aaa77da386e9baab54f2e6b436105ab0703db857b3d9adc7c4e2df8f0e6e2"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.575007 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" (UID: "8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.597352 4953 generic.go:334] "Generic (PLEG): container finished" podID="7b77681a-0823-42e6-b0a4-2af1ce955970" containerID="4221eaf86758a08993df7de85552e51a217b8b7260281a70c92cd1a666135bc7" exitCode=0 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.597474 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b77681a-0823-42e6-b0a4-2af1ce955970","Type":"ContainerDied","Data":"4221eaf86758a08993df7de85552e51a217b8b7260281a70c92cd1a666135bc7"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.601635 4953 generic.go:334] "Generic (PLEG): container finished" podID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerID="7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f" exitCode=0 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.601667 4953 generic.go:334] "Generic (PLEG): container finished" podID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerID="833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409" exitCode=2 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.601674 4953 generic.go:334] "Generic (PLEG): container finished" podID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerID="26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef" exitCode=0 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.601712 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fdfbbe2-a3b8-4834-9920-114c40de67dc","Type":"ContainerDied","Data":"7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.601737 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fdfbbe2-a3b8-4834-9920-114c40de67dc","Type":"ContainerDied","Data":"833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.601746 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fdfbbe2-a3b8-4834-9920-114c40de67dc","Type":"ContainerDied","Data":"26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.603850 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0caaa-account-delete-n4fck" event={"ID":"12df8687-e24e-47fb-802c-3ab978ed04fd","Type":"ContainerStarted","Data":"a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.604793 4953 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/novacell0caaa-account-delete-n4fck" secret="" err="secret \"galera-openstack-dockercfg-4mfbc\" not found" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.610690 4953 generic.go:334] "Generic (PLEG): container finished" podID="9da03c89-b3fb-431e-bef0-eb8f6d0b180e" containerID="e77c7ae1c87e7949e1f82009c61668e44597f8128ff13d56a0b924b074388ac2" exitCode=2 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.610792 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9da03c89-b3fb-431e-bef0-eb8f6d0b180e","Type":"ContainerDied","Data":"e77c7ae1c87e7949e1f82009c61668e44597f8128ff13d56a0b924b074388ac2"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.622118 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.626615 4953 generic.go:334] "Generic (PLEG): container finished" podID="767370a9-f8dd-4370-a2cc-f5baeff52c54" containerID="f1f3935cba9d49f468aa48835e818e819d4e1455992846d4cd92a2e960523799" exitCode=0 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.626688 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c85df7b9d-rdbfq" event={"ID":"767370a9-f8dd-4370-a2cc-f5baeff52c54","Type":"ContainerDied","Data":"f1f3935cba9d49f468aa48835e818e819d4e1455992846d4cd92a2e960523799"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.629496 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi339c-account-delete-l2kws" event={"ID":"10e32559-b465-4538-af8b-9dd3deedf2b9","Type":"ContainerStarted","Data":"4f316ac35b4df1a9f5195a54af7c20b83aa8d2cd8c9a6a360ce94ef77c733433"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.632103 4953 generic.go:334] "Generic (PLEG): container finished" podID="e067a835-8a1a-4672-aaea-b8c101109018" containerID="30c55b9a63cff189be97f461ce82cf19d069820c204b72f08733751b6e4d8e3b" exitCode=0 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.632222 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e067a835-8a1a-4672-aaea-b8c101109018","Type":"ContainerDied","Data":"30c55b9a63cff189be97f461ce82cf19d069820c204b72f08733751b6e4d8e3b"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.633908 4953 generic.go:334] "Generic (PLEG): container finished" podID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" containerID="39ba09432c8d47141f48eb0a06529b605d51f099d8537d288c6ec875cebae528" exitCode=0 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.634069 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6","Type":"ContainerDied","Data":"39ba09432c8d47141f48eb0a06529b605d51f099d8537d288c6ec875cebae528"} Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.634245 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.726230 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.726304 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts podName:12df8687-e24e-47fb-802c-3ab978ed04fd nodeName:}" failed. No retries permitted until 2025-12-11 10:38:10.226286053 +0000 UTC m=+1608.250145086 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts") pod "novacell0caaa-account-delete-n4fck" (UID: "12df8687-e24e-47fb-802c-3ab978ed04fd") : configmap "openstack-scripts" not found Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.871222 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="23f99edb-3870-42f3-bdef-ec4db335ba35" containerName="galera" containerID="cri-o://027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476" gracePeriod=30 Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.930937 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts\") pod \"keystone7975-account-delete-bmljp\" (UID: \"38d12eb6-7ca0-4003-a9f7-f691f65097e4\") " pod="openstack/keystone7975-account-delete-bmljp" Dec 11 10:38:09 crc kubenswrapper[4953]: I1211 10:38:09.931141 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bckrd\" (UniqueName: \"kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd\") pod \"keystone7975-account-delete-bmljp\" (UID: \"38d12eb6-7ca0-4003-a9f7-f691f65097e4\") " pod="openstack/keystone7975-account-delete-bmljp" Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.931718 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.931782 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts podName:38d12eb6-7ca0-4003-a9f7-f691f65097e4 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:10.931769342 +0000 UTC m=+1608.955628375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts") pod "keystone7975-account-delete-bmljp" (UID: "38d12eb6-7ca0-4003-a9f7-f691f65097e4") : configmap "openstack-scripts" not found Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.937550 4953 projected.go:194] Error preparing data for projected volume kube-api-access-bckrd for pod openstack/keystone7975-account-delete-bmljp: failed to fetch token: serviceaccounts "galera-openstack" not found Dec 11 10:38:09 crc kubenswrapper[4953]: E1211 10:38:09.937650 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd podName:38d12eb6-7ca0-4003-a9f7-f691f65097e4 nodeName:}" failed. 
No retries permitted until 2025-12-11 10:38:10.937617975 +0000 UTC m=+1608.961477008 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bckrd" (UniqueName: "kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd") pod "keystone7975-account-delete-bmljp" (UID: "38d12eb6-7ca0-4003-a9f7-f691f65097e4") : failed to fetch token: serviceaccounts "galera-openstack" not found Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.030781 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c85df7b9d-rdbfq" podUID="767370a9-f8dd-4370-a2cc-f5baeff52c54" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.030856 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c85df7b9d-rdbfq" podUID="767370a9-f8dd-4370-a2cc-f5baeff52c54" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.105984 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b29c8985-0d8c-4382-9969-29422929136f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.241132 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.241218 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts podName:12df8687-e24e-47fb-802c-3ab978ed04fd nodeName:}" failed. No retries permitted until 2025-12-11 10:38:11.241201157 +0000 UTC m=+1609.265060190 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts") pod "novacell0caaa-account-delete-n4fck" (UID: "12df8687-e24e-47fb-802c-3ab978ed04fd") : configmap "openstack-scripts" not found Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.245731 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="01196778-96de-4f79-b9ac-e01243f86ebb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.281788 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.296055 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novacell0caaa-account-delete-n4fck" podStartSLOduration=7.29603598 podStartE2EDuration="7.29603598s" podCreationTimestamp="2025-12-11 10:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:38:09.624514634 +0000 UTC m=+1607.648373667" watchObservedRunningTime="2025-12-11 10:38:10.29603598 +0000 UTC m=+1608.319895033" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.306673 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.319059 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.330780 4953 scope.go:117] "RemoveContainer" containerID="16b6376ca3b41c1f6e9ee55d0479d0566772d86be8f749eb1b02c4edcfa051b9" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.336333 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.341830 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mghl\" (UniqueName: \"kubernetes.io/projected/d1833793-1408-450f-8a7e-e01e6048edd5-kube-api-access-7mghl\") pod \"d1833793-1408-450f-8a7e-e01e6048edd5\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.342056 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-combined-ca-bundle\") pod \"d1833793-1408-450f-8a7e-e01e6048edd5\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.342167 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-config-data\") pod \"d1833793-1408-450f-8a7e-e01e6048edd5\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.342204 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-scripts\") pod \"d1833793-1408-450f-8a7e-e01e6048edd5\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.342236 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-config-data-custom\") pod \"d1833793-1408-450f-8a7e-e01e6048edd5\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.342293 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1833793-1408-450f-8a7e-e01e6048edd5-etc-machine-id\") pod \"d1833793-1408-450f-8a7e-e01e6048edd5\" (UID: \"d1833793-1408-450f-8a7e-e01e6048edd5\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.342969 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/d1833793-1408-450f-8a7e-e01e6048edd5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d1833793-1408-450f-8a7e-e01e6048edd5" (UID: "d1833793-1408-450f-8a7e-e01e6048edd5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.356910 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d1833793-1408-450f-8a7e-e01e6048edd5" (UID: "d1833793-1408-450f-8a7e-e01e6048edd5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.357074 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1833793-1408-450f-8a7e-e01e6048edd5-kube-api-access-7mghl" (OuterVolumeSpecName: "kube-api-access-7mghl") pod "d1833793-1408-450f-8a7e-e01e6048edd5" (UID: "d1833793-1408-450f-8a7e-e01e6048edd5"). InnerVolumeSpecName "kube-api-access-7mghl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.364483 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell1e356-account-delete-h5n9c"] Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.383761 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-scripts" (OuterVolumeSpecName: "scripts") pod "d1833793-1408-450f-8a7e-e01e6048edd5" (UID: "d1833793-1408-450f-8a7e-e01e6048edd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.403885 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell1e356-account-delete-h5n9c"] Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.435149 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.443845 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.444841 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-config-data\") pod \"79a93889-ae40-4bd1-a697-5797e065231b\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.444990 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-combined-ca-bundle\") pod \"79a93889-ae40-4bd1-a697-5797e065231b\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.445032 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-vencrypt-tls-certs\") pod \"79a93889-ae40-4bd1-a697-5797e065231b\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.445109 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s4ck\" (UniqueName: 
\"kubernetes.io/projected/79a93889-ae40-4bd1-a697-5797e065231b-kube-api-access-9s4ck\") pod \"79a93889-ae40-4bd1-a697-5797e065231b\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.445157 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-nova-novncproxy-tls-certs\") pod \"79a93889-ae40-4bd1-a697-5797e065231b\" (UID: \"79a93889-ae40-4bd1-a697-5797e065231b\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.445643 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c62085a-9722-4020-a26f-2adee83f78c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.445659 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.445670 4953 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.445681 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-999fx\" (UniqueName: \"kubernetes.io/projected/9c62085a-9722-4020-a26f-2adee83f78c8-kube-api-access-999fx\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.445694 4953 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1833793-1408-450f-8a7e-e01e6048edd5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.445706 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mghl\" (UniqueName: \"kubernetes.io/projected/d1833793-1408-450f-8a7e-e01e6048edd5-kube-api-access-7mghl\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.456053 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a93889-ae40-4bd1-a697-5797e065231b-kube-api-access-9s4ck" (OuterVolumeSpecName: "kube-api-access-9s4ck") pod "79a93889-ae40-4bd1-a697-5797e065231b" (UID: "79a93889-ae40-4bd1-a697-5797e065231b"). InnerVolumeSpecName "kube-api-access-9s4ck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.495561 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3079f9fc-3d3e-4647-a889-fae4277437fc" path="/var/lib/kubelet/pods/3079f9fc-3d3e-4647-a889-fae4277437fc/volumes" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.496265 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4318ee7e-5751-4dc5-becf-a06da8ab5a59" path="/var/lib/kubelet/pods/4318ee7e-5751-4dc5-becf-a06da8ab5a59/volumes" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.497639 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48902dd9-8c9f-4983-b8dd-6f22f4382a19" path="/var/lib/kubelet/pods/48902dd9-8c9f-4983-b8dd-6f22f4382a19/volumes" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.498996 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498f7a43-7db9-42e8-b722-a5fb6ae4749f" path="/var/lib/kubelet/pods/498f7a43-7db9-42e8-b722-a5fb6ae4749f/volumes" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.499745 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e" path="/var/lib/kubelet/pods/8bb60f7f-22a1-48c2-8815-f2c6cfddfc2e/volumes" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.500230 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c62085a-9722-4020-a26f-2adee83f78c8" path="/var/lib/kubelet/pods/9c62085a-9722-4020-a26f-2adee83f78c8/volumes" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.500683 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4237606-fdcf-403b-8e5a-1bbb4a2e38de" path="/var/lib/kubelet/pods/b4237606-fdcf-403b-8e5a-1bbb4a2e38de/volumes" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.502157 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d635145c-c504-4916-910f-6a5c18c25aac" path="/var/lib/kubelet/pods/d635145c-c504-4916-910f-6a5c18c25aac/volumes" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.502709 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27f6309-0ccd-4aca-ad87-0cd7a9357469" path="/var/lib/kubelet/pods/e27f6309-0ccd-4aca-ad87-0cd7a9357469/volumes" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.503276 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e47953ec-41e5-458b-ad9f-a7e72a002b8a" path="/var/lib/kubelet/pods/e47953ec-41e5-458b-ad9f-a7e72a002b8a/volumes" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.518418 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1833793-1408-450f-8a7e-e01e6048edd5" (UID: "d1833793-1408-450f-8a7e-e01e6048edd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.548759 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79a93889-ae40-4bd1-a697-5797e065231b" (UID: "79a93889-ae40-4bd1-a697-5797e065231b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.548965 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.548989 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.549002 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s4ck\" (UniqueName: \"kubernetes.io/projected/79a93889-ae40-4bd1-a697-5797e065231b-kube-api-access-9s4ck\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.572233 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-config-data" (OuterVolumeSpecName: "config-data") pod "79a93889-ae40-4bd1-a697-5797e065231b" (UID: "79a93889-ae40-4bd1-a697-5797e065231b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.573326 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "79a93889-ae40-4bd1-a697-5797e065231b" (UID: "79a93889-ae40-4bd1-a697-5797e065231b"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.599723 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "79a93889-ae40-4bd1-a697-5797e065231b" (UID: "79a93889-ae40-4bd1-a697-5797e065231b"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.615871 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-config-data" (OuterVolumeSpecName: "config-data") pod "d1833793-1408-450f-8a7e-e01e6048edd5" (UID: "d1833793-1408-450f-8a7e-e01e6048edd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.660902 4953 generic.go:334] "Generic (PLEG): container finished" podID="ab07f951-5c8d-428b-9b26-52ea2284ee52" containerID="41bbb6ee795ebc3c22e509c06b7f775810c8aed2e9da9f0f6b746d7e045c0c23" exitCode=0 Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.666841 4953 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/novaapi339c-account-delete-l2kws" secret="" err="secret \"galera-openstack-dockercfg-4mfbc\" not found" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.667609 4953 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.667640 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.667650 4953 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a93889-ae40-4bd1-a697-5797e065231b-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.667660 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1833793-1408-450f-8a7e-e01e6048edd5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.670746 4953 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/barbican3c8c-account-delete-kzsq8" secret="" err="secret \"galera-openstack-dockercfg-4mfbc\" not found" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.679331 4953 generic.go:334] "Generic (PLEG): container finished" podID="7af3727e-8096-420d-b8d0-95988a5d36db" containerID="76b1adf1ecb9cc73cce6fab14903ebf309e0061c7db3b0247296d4d28611c686" exitCode=0 Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.685864 4953 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/placementa6a0-account-delete-vhpnd" secret="" err="secret \"galera-openstack-dockercfg-4mfbc\" not found" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.688831 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novaapi339c-account-delete-l2kws" podStartSLOduration=7.688804575 podStartE2EDuration="7.688804575s" podCreationTimestamp="2025-12-11 10:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:38:10.677045025 +0000 UTC m=+1608.700904058" watchObservedRunningTime="2025-12-11 10:38:10.688804575 +0000 UTC m=+1608.712663608" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.742496 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican3c8c-account-delete-kzsq8" podStartSLOduration=7.742477122 podStartE2EDuration="7.742477122s" podCreationTimestamp="2025-12-11 10:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:38:10.737118624 +0000 UTC m=+1608.760977667" watchObservedRunningTime="2025-12-11 10:38:10.742477122 +0000 UTC m=+1608.766336155" Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.770659 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.770706 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts podName:3aee1a2c-6a1e-48c0-9491-3f61371047eb nodeName:}" failed. No retries permitted until 2025-12-11 10:38:11.270692909 +0000 UTC m=+1609.294551942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts") pod "placementa6a0-account-delete-vhpnd" (UID: "3aee1a2c-6a1e-48c0-9491-3f61371047eb") : configmap "openstack-scripts" not found Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.770978 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.771002 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts podName:b09879bd-62c8-4810-ad58-09db28d6afb5 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:11.270994879 +0000 UTC m=+1609.294853912 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts") pod "barbican3c8c-account-delete-kzsq8" (UID: "b09879bd-62c8-4810-ad58-09db28d6afb5") : configmap "openstack-scripts" not found Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.771029 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.771052 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts podName:10e32559-b465-4538-af8b-9dd3deedf2b9 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:11.27104088 +0000 UTC m=+1609.294899913 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts") pod "novaapi339c-account-delete-l2kws" (UID: "10e32559-b465-4538-af8b-9dd3deedf2b9") : configmap "openstack-scripts" not found Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.771077 4953 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.771093 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data podName:01196778-96de-4f79-b9ac-e01243f86ebb nodeName:}" failed. No retries permitted until 2025-12-11 10:38:18.771088271 +0000 UTC m=+1616.794947294 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data") pod "rabbitmq-cell1-server-0" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb") : configmap "rabbitmq-cell1-config-data" not found Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.776355 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b1adf1ecb9cc73cce6fab14903ebf309e0061c7db3b0247296d4d28611c686 is running failed: container process not found" containerID="76b1adf1ecb9cc73cce6fab14903ebf309e0061c7db3b0247296d4d28611c686" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.776560 4953 generic.go:334] "Generic (PLEG): container finished" podID="5c566b6b-16f8-422c-acda-0325e36103e6" containerID="83eedb4ddd84362084d8ccac38fed9fcbcacfbfefe97227d1e7bf4df1164fbc0" exitCode=0 Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.785229 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b1adf1ecb9cc73cce6fab14903ebf309e0061c7db3b0247296d4d28611c686 is running failed: container process not found" containerID="76b1adf1ecb9cc73cce6fab14903ebf309e0061c7db3b0247296d4d28611c686" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.786608 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b1adf1ecb9cc73cce6fab14903ebf309e0061c7db3b0247296d4d28611c686 is running failed: container process not found" containerID="76b1adf1ecb9cc73cce6fab14903ebf309e0061c7db3b0247296d4d28611c686" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.786640 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b1adf1ecb9cc73cce6fab14903ebf309e0061c7db3b0247296d4d28611c686 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7af3727e-8096-420d-b8d0-95988a5d36db" containerName="nova-scheduler-scheduler" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.788461 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.821476 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.821926 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.825544 4953 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novacell0caaa-account-delete-n4fck" secret="" err="secret \"galera-openstack-dockercfg-4mfbc\" not found" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.836519 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance5aff-account-delete-5hksm" event={"ID":"e6515789-e6f6-4aa3-83f3-4fc58f862dc9","Type":"ContainerDied","Data":"fb60de9865675abd3277bc52365656a3c4bd56bd096306b1e205c9933f3fd7e6"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.836893 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb60de9865675abd3277bc52365656a3c4bd56bd096306b1e205c9933f3fd7e6" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.836912 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ab07f951-5c8d-428b-9b26-52ea2284ee52","Type":"ContainerDied","Data":"41bbb6ee795ebc3c22e509c06b7f775810c8aed2e9da9f0f6b746d7e045c0c23"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.836929 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi339c-account-delete-l2kws" event={"ID":"10e32559-b465-4538-af8b-9dd3deedf2b9","Type":"ContainerStarted","Data":"724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.836944 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican3c8c-account-delete-kzsq8" event={"ID":"b09879bd-62c8-4810-ad58-09db28d6afb5","Type":"ContainerStarted","Data":"b6d2cd8785b03d254d03f3c737ce94fe0726a8177ca0464c1ca95a86c4f2ae0c"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.836956 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7af3727e-8096-420d-b8d0-95988a5d36db","Type":"ContainerDied","Data":"76b1adf1ecb9cc73cce6fab14903ebf309e0061c7db3b0247296d4d28611c686"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.836973 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementa6a0-account-delete-vhpnd" event={"ID":"3aee1a2c-6a1e-48c0-9491-3f61371047eb","Type":"ContainerStarted","Data":"fdc5b9b474ad12ca3867351b611c8f54d3bb3368f91df95bfe30268fd52088fe"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.836985 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron242b-account-delete-v47hk" event={"ID":"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c","Type":"ContainerDied","Data":"41dbb44fe99702597b2dfb7089798ccafb97bd82bf8ec43a43f67a8d5f4c222b"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.836997 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41dbb44fe99702597b2dfb7089798ccafb97bd82bf8ec43a43f67a8d5f4c222b" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837007 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5c566b6b-16f8-422c-acda-0325e36103e6","Type":"ContainerDied","Data":"83eedb4ddd84362084d8ccac38fed9fcbcacfbfefe97227d1e7bf4df1164fbc0"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837021 4953 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e067a835-8a1a-4672-aaea-b8c101109018","Type":"ContainerDied","Data":"3ef7f0dbe4f89164e33f294de7045589466f6b4bbcc64f97b84b928e92f92a28"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837035 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ef7f0dbe4f89164e33f294de7045589466f6b4bbcc64f97b84b928e92f92a28" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837045 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderaf3a-account-delete-rtt56" event={"ID":"46cd550e-17c8-4cd2-a5e0-9746edf42836","Type":"ContainerDied","Data":"e6eb47f34c128cb4f1af94d49e8725fbbd0356cbb00225b9a5ebe82932fa74f7"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837057 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6eb47f34c128cb4f1af94d49e8725fbbd0356cbb00225b9a5ebe82932fa74f7" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837066 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4b1b7520-f52c-4a2a-98e5-16ac7460bade","Type":"ContainerDied","Data":"ccc51b63a022f3a56643f7cf5c7d4f3cbcc20bcfcd5e67a451eca507a78e60c3"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837079 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc51b63a022f3a56643f7cf5c7d4f3cbcc20bcfcd5e67a451eca507a78e60c3" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837090 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b77681a-0823-42e6-b0a4-2af1ce955970","Type":"ContainerDied","Data":"e0f85e3a56fdd109713da0f7db29fda300773a72206b73c5e4adf00a8cc8c7bf"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837103 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0f85e3a56fdd109713da0f7db29fda300773a72206b73c5e4adf00a8cc8c7bf" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837112 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9da03c89-b3fb-431e-bef0-eb8f6d0b180e","Type":"ContainerDied","Data":"7dcdbbd8ee7d8a16bad77426b73131c9d4d423d0cbe6d097e8bf75b5a2d868cc"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837123 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dcdbbd8ee7d8a16bad77426b73131c9d4d423d0cbe6d097e8bf75b5a2d868cc" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837133 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6","Type":"ContainerDied","Data":"e1ee2d4c9b8918cc2f47f83d4f058d2615bd70bb7a00d1ea815df94e5ab86f52"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837146 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ee2d4c9b8918cc2f47f83d4f058d2615bd70bb7a00d1ea815df94e5ab86f52" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837155 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7567d9469d-rx5dx" event={"ID":"345a513a-93a0-4e23-9266-3eeaf3ff0c10","Type":"ContainerDied","Data":"4e803c3ca9de08930239d9fb373f85ae0e80aceb76085ed679d03e93d05ec822"} Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.837166 4953 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4e803c3ca9de08930239d9fb373f85ae0e80aceb76085ed679d03e93d05ec822" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.853892 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placementa6a0-account-delete-vhpnd" podStartSLOduration=8.853874373 podStartE2EDuration="8.853874373s" podCreationTimestamp="2025-12-11 10:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:38:10.765895378 +0000 UTC m=+1608.789754421" watchObservedRunningTime="2025-12-11 10:38:10.853874373 +0000 UTC m=+1608.877733406" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.871405 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-internal-tls-certs\") pod \"8521d832-efe5-4653-8c0e-8921f916e10f\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.871476 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-public-tls-certs\") pod \"8521d832-efe5-4653-8c0e-8921f916e10f\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.871555 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8521d832-efe5-4653-8c0e-8921f916e10f-log-httpd\") pod \"8521d832-efe5-4653-8c0e-8921f916e10f\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.871597 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8521d832-efe5-4653-8c0e-8921f916e10f-run-httpd\") pod \"8521d832-efe5-4653-8c0e-8921f916e10f\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.871818 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-config-data\") pod \"8521d832-efe5-4653-8c0e-8921f916e10f\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.871837 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8521d832-efe5-4653-8c0e-8921f916e10f-etc-swift\") pod \"8521d832-efe5-4653-8c0e-8921f916e10f\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.873936 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg5gl\" (UniqueName: \"kubernetes.io/projected/8521d832-efe5-4653-8c0e-8921f916e10f-kube-api-access-jg5gl\") pod \"8521d832-efe5-4653-8c0e-8921f916e10f\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.874056 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-combined-ca-bundle\") pod \"8521d832-efe5-4653-8c0e-8921f916e10f\" (UID: \"8521d832-efe5-4653-8c0e-8921f916e10f\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.876456 4953 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8521d832-efe5-4653-8c0e-8921f916e10f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8521d832-efe5-4653-8c0e-8921f916e10f" (UID: "8521d832-efe5-4653-8c0e-8921f916e10f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.876807 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8521d832-efe5-4653-8c0e-8921f916e10f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8521d832-efe5-4653-8c0e-8921f916e10f" (UID: "8521d832-efe5-4653-8c0e-8921f916e10f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.884514 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8521d832-efe5-4653-8c0e-8921f916e10f-kube-api-access-jg5gl" (OuterVolumeSpecName: "kube-api-access-jg5gl") pod "8521d832-efe5-4653-8c0e-8921f916e10f" (UID: "8521d832-efe5-4653-8c0e-8921f916e10f"). InnerVolumeSpecName "kube-api-access-jg5gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.889782 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8521d832-efe5-4653-8c0e-8921f916e10f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8521d832-efe5-4653-8c0e-8921f916e10f" (UID: "8521d832-efe5-4653-8c0e-8921f916e10f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.890605 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.973186 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d40902f2adb77e2b7dde3ed43d14df9863e66572e62ab6a82f12fa7bb0bcca2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.977347 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q67g\" (UniqueName: \"kubernetes.io/projected/544e1955-4316-4587-90a8-94bac4f81ae5-kube-api-access-8q67g\") pod \"544e1955-4316-4587-90a8-94bac4f81ae5\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.977437 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/544e1955-4316-4587-90a8-94bac4f81ae5-logs\") pod \"544e1955-4316-4587-90a8-94bac4f81ae5\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.977605 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-config-data\") pod \"544e1955-4316-4587-90a8-94bac4f81ae5\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.977586 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d40902f2adb77e2b7dde3ed43d14df9863e66572e62ab6a82f12fa7bb0bcca2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.977685 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.977645 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-config-data-custom\") pod \"544e1955-4316-4587-90a8-94bac4f81ae5\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.977936 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-combined-ca-bundle\") pod \"544e1955-4316-4587-90a8-94bac4f81ae5\" (UID: \"544e1955-4316-4587-90a8-94bac4f81ae5\") " Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.978677 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544e1955-4316-4587-90a8-94bac4f81ae5-logs" (OuterVolumeSpecName: "logs") pod "544e1955-4316-4587-90a8-94bac4f81ae5" (UID: "544e1955-4316-4587-90a8-94bac4f81ae5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.980188 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bckrd\" (UniqueName: \"kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd\") pod \"keystone7975-account-delete-bmljp\" (UID: \"38d12eb6-7ca0-4003-a9f7-f691f65097e4\") " pod="openstack/keystone7975-account-delete-bmljp" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.981854 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts\") pod \"keystone7975-account-delete-bmljp\" (UID: \"38d12eb6-7ca0-4003-a9f7-f691f65097e4\") " pod="openstack/keystone7975-account-delete-bmljp" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.982172 4953 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8521d832-efe5-4653-8c0e-8921f916e10f-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.982187 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg5gl\" (UniqueName: \"kubernetes.io/projected/8521d832-efe5-4653-8c0e-8921f916e10f-kube-api-access-jg5gl\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.982198 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/544e1955-4316-4587-90a8-94bac4f81ae5-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.982206 4953 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8521d832-efe5-4653-8c0e-8921f916e10f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.982214 4953 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8521d832-efe5-4653-8c0e-8921f916e10f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.982280 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.982328 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts podName:38d12eb6-7ca0-4003-a9f7-f691f65097e4 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:12.98231416 +0000 UTC m=+1611.006173193 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts") pod "keystone7975-account-delete-bmljp" (UID: "38d12eb6-7ca0-4003-a9f7-f691f65097e4") : configmap "openstack-scripts" not found Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.984939 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d40902f2adb77e2b7dde3ed43d14df9863e66572e62ab6a82f12fa7bb0bcca2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.985012 4953 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="1b3d5c24-61f6-4926-94ec-0e3a462334df" containerName="nova-cell0-conductor-conductor" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.987921 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8521d832-efe5-4653-8c0e-8921f916e10f" (UID: "8521d832-efe5-4653-8c0e-8921f916e10f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.987993 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8521d832-efe5-4653-8c0e-8921f916e10f" (UID: "8521d832-efe5-4653-8c0e-8921f916e10f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.988098 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544e1955-4316-4587-90a8-94bac4f81ae5-kube-api-access-8q67g" (OuterVolumeSpecName: "kube-api-access-8q67g") pod "544e1955-4316-4587-90a8-94bac4f81ae5" (UID: "544e1955-4316-4587-90a8-94bac4f81ae5"). InnerVolumeSpecName "kube-api-access-8q67g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.990529 4953 projected.go:194] Error preparing data for projected volume kube-api-access-bckrd for pod openstack/keystone7975-account-delete-bmljp: failed to fetch token: serviceaccounts "galera-openstack" not found Dec 11 10:38:10 crc kubenswrapper[4953]: E1211 10:38:10.990608 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd podName:38d12eb6-7ca0-4003-a9f7-f691f65097e4 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:12.99059218 +0000 UTC m=+1611.014451213 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bckrd" (UniqueName: "kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd") pod "keystone7975-account-delete-bmljp" (UID: "38d12eb6-7ca0-4003-a9f7-f691f65097e4") : failed to fetch token: serviceaccounts "galera-openstack" not found Dec 11 10:38:10 crc kubenswrapper[4953]: I1211 10:38:10.991642 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "544e1955-4316-4587-90a8-94bac4f81ae5" (UID: "544e1955-4316-4587-90a8-94bac4f81ae5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.000660 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.030124 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-bckrd operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone7975-account-delete-bmljp" podUID="38d12eb6-7ca0-4003-a9f7-f691f65097e4" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.030718 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.034846 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7567d9469d-rx5dx" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.036673 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8521d832-efe5-4653-8c0e-8921f916e10f" (UID: "8521d832-efe5-4653-8c0e-8921f916e10f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.049311 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.049819 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "544e1955-4316-4587-90a8-94bac4f81ae5" (UID: "544e1955-4316-4587-90a8-94bac4f81ae5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.059271 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.064674 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.075624 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085279 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27258186-4cab-45b4-a20c-a4c3ddc82f76-config-data-generated\") pod \"27258186-4cab-45b4-a20c-a4c3ddc82f76\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085402 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27258186-4cab-45b4-a20c-a4c3ddc82f76-combined-ca-bundle\") pod \"27258186-4cab-45b4-a20c-a4c3ddc82f76\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085457 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-public-tls-certs\") pod \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085484 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"27258186-4cab-45b4-a20c-a4c3ddc82f76\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085513 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-scripts\") pod \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085586 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-combined-ca-bundle\") pod \"caec0159-12b1-46f9-952c-10f229948036\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085630 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-config-data\") pod \"caec0159-12b1-46f9-952c-10f229948036\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085657 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caec0159-12b1-46f9-952c-10f229948036-logs\") pod \"caec0159-12b1-46f9-952c-10f229948036\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085684 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-combined-ca-bundle\") pod \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085748 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/345a513a-93a0-4e23-9266-3eeaf3ff0c10-logs\") pod \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085788 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-config-data-default\") pod \"27258186-4cab-45b4-a20c-a4c3ddc82f76\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085816 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-config-data-custom\") pod \"caec0159-12b1-46f9-952c-10f229948036\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.085972 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27258186-4cab-45b4-a20c-a4c3ddc82f76-galera-tls-certs\") pod \"27258186-4cab-45b4-a20c-a4c3ddc82f76\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.086029 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zggh\" (UniqueName: \"kubernetes.io/projected/345a513a-93a0-4e23-9266-3eeaf3ff0c10-kube-api-access-4zggh\") pod \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.086052 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-kolla-config\") pod \"27258186-4cab-45b4-a20c-a4c3ddc82f76\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.086095 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-internal-tls-certs\") pod \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.086142 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f6q8\" (UniqueName: \"kubernetes.io/projected/caec0159-12b1-46f9-952c-10f229948036-kube-api-access-5f6q8\") pod \"caec0159-12b1-46f9-952c-10f229948036\" (UID: \"caec0159-12b1-46f9-952c-10f229948036\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.086188 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npns9\" (UniqueName: \"kubernetes.io/projected/27258186-4cab-45b4-a20c-a4c3ddc82f76-kube-api-access-npns9\") pod \"27258186-4cab-45b4-a20c-a4c3ddc82f76\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.086227 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-operator-scripts\") pod \"27258186-4cab-45b4-a20c-a4c3ddc82f76\" (UID: \"27258186-4cab-45b4-a20c-a4c3ddc82f76\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.086248 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-config-data\") pod \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\" (UID: \"345a513a-93a0-4e23-9266-3eeaf3ff0c10\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.087027 4953 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.087052 4953 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.087064 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q67g\" (UniqueName: \"kubernetes.io/projected/544e1955-4316-4587-90a8-94bac4f81ae5-kube-api-access-8q67g\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.087078 4953 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.087087 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.087099 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.091228 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27258186-4cab-45b4-a20c-a4c3ddc82f76-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "27258186-4cab-45b4-a20c-a4c3ddc82f76" (UID: "27258186-4cab-45b4-a20c-a4c3ddc82f76"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.095242 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27258186-4cab-45b4-a20c-a4c3ddc82f76" (UID: "27258186-4cab-45b4-a20c-a4c3ddc82f76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.097425 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "caec0159-12b1-46f9-952c-10f229948036" (UID: "caec0159-12b1-46f9-952c-10f229948036"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.098056 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "27258186-4cab-45b4-a20c-a4c3ddc82f76" (UID: "27258186-4cab-45b4-a20c-a4c3ddc82f76"). 
InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.099273 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/345a513a-93a0-4e23-9266-3eeaf3ff0c10-logs" (OuterVolumeSpecName: "logs") pod "345a513a-93a0-4e23-9266-3eeaf3ff0c10" (UID: "345a513a-93a0-4e23-9266-3eeaf3ff0c10"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.100214 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.101298 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caec0159-12b1-46f9-952c-10f229948036-logs" (OuterVolumeSpecName: "logs") pod "caec0159-12b1-46f9-952c-10f229948036" (UID: "caec0159-12b1-46f9-952c-10f229948036"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.105378 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-scripts" (OuterVolumeSpecName: "scripts") pod "345a513a-93a0-4e23-9266-3eeaf3ff0c10" (UID: "345a513a-93a0-4e23-9266-3eeaf3ff0c10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.106274 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27258186-4cab-45b4-a20c-a4c3ddc82f76-kube-api-access-npns9" (OuterVolumeSpecName: "kube-api-access-npns9") pod "27258186-4cab-45b4-a20c-a4c3ddc82f76" (UID: "27258186-4cab-45b4-a20c-a4c3ddc82f76"). InnerVolumeSpecName "kube-api-access-npns9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.108401 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "27258186-4cab-45b4-a20c-a4c3ddc82f76" (UID: "27258186-4cab-45b4-a20c-a4c3ddc82f76"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.109870 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345a513a-93a0-4e23-9266-3eeaf3ff0c10-kube-api-access-4zggh" (OuterVolumeSpecName: "kube-api-access-4zggh") pod "345a513a-93a0-4e23-9266-3eeaf3ff0c10" (UID: "345a513a-93a0-4e23-9266-3eeaf3ff0c10"). InnerVolumeSpecName "kube-api-access-4zggh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.115158 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-config-data" (OuterVolumeSpecName: "config-data") pod "8521d832-efe5-4653-8c0e-8921f916e10f" (UID: "8521d832-efe5-4653-8c0e-8921f916e10f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.117601 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.126302 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-config-data" (OuterVolumeSpecName: "config-data") pod "544e1955-4316-4587-90a8-94bac4f81ae5" (UID: "544e1955-4316-4587-90a8-94bac4f81ae5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.141468 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "mysql-db") pod "27258186-4cab-45b4-a20c-a4c3ddc82f76" (UID: "27258186-4cab-45b4-a20c-a4c3ddc82f76"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.165588 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caec0159-12b1-46f9-952c-10f229948036-kube-api-access-5f6q8" (OuterVolumeSpecName: "kube-api-access-5f6q8") pod "caec0159-12b1-46f9-952c-10f229948036" (UID: "caec0159-12b1-46f9-952c-10f229948036"). InnerVolumeSpecName "kube-api-access-5f6q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.168290 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.187830 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw7zc\" (UniqueName: \"kubernetes.io/projected/4b1b7520-f52c-4a2a-98e5-16ac7460bade-kube-api-access-lw7zc\") pod \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.187893 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-logs\") pod \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.187957 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-config-data-custom\") pod \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.187992 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b1b7520-f52c-4a2a-98e5-16ac7460bade-logs\") pod \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188012 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-combined-ca-bundle\") pod \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188064 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-combined-ca-bundle\") pod \"e067a835-8a1a-4672-aaea-b8c101109018\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188083 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-combined-ca-bundle\") pod \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188109 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-scripts\") pod \"e067a835-8a1a-4672-aaea-b8c101109018\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188140 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-config-data\") pod \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188160 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"e067a835-8a1a-4672-aaea-b8c101109018\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188194 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-config-data\") pod \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188218 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lgdf\" (UniqueName: \"kubernetes.io/projected/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-kube-api-access-4lgdf\") pod \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188267 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-internal-tls-certs\") pod \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188294 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-scripts\") pod \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188329 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b1b7520-f52c-4a2a-98e5-16ac7460bade-etc-machine-id\") pod \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188347 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e067a835-8a1a-4672-aaea-b8c101109018-logs\") pod 
\"e067a835-8a1a-4672-aaea-b8c101109018\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188367 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-internal-tls-certs\") pod \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188385 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-config-data\") pod \"e067a835-8a1a-4672-aaea-b8c101109018\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188407 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-public-tls-certs\") pod \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\" (UID: \"4b1b7520-f52c-4a2a-98e5-16ac7460bade\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188422 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e067a835-8a1a-4672-aaea-b8c101109018-httpd-run\") pod \"e067a835-8a1a-4672-aaea-b8c101109018\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188445 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x595\" (UniqueName: \"kubernetes.io/projected/e067a835-8a1a-4672-aaea-b8c101109018-kube-api-access-6x595\") pod \"e067a835-8a1a-4672-aaea-b8c101109018\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188475 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-public-tls-certs\") pod \"e067a835-8a1a-4672-aaea-b8c101109018\" (UID: \"e067a835-8a1a-4672-aaea-b8c101109018\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188490 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-public-tls-certs\") pod \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\" (UID: \"4b66dbe7-edd9-4e23-a3d0-0661efe89ac6\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.188987 4953 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189002 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zggh\" (UniqueName: \"kubernetes.io/projected/345a513a-93a0-4e23-9266-3eeaf3ff0c10-kube-api-access-4zggh\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189013 4953 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189022 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f6q8\" (UniqueName: 
\"kubernetes.io/projected/caec0159-12b1-46f9-952c-10f229948036-kube-api-access-5f6q8\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189031 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npns9\" (UniqueName: \"kubernetes.io/projected/27258186-4cab-45b4-a20c-a4c3ddc82f76-kube-api-access-npns9\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189040 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189051 4953 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27258186-4cab-45b4-a20c-a4c3ddc82f76-config-data-generated\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189059 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8521d832-efe5-4653-8c0e-8921f916e10f-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189078 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189090 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189100 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caec0159-12b1-46f9-952c-10f229948036-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189124 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/345a513a-93a0-4e23-9266-3eeaf3ff0c10-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189136 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544e1955-4316-4587-90a8-94bac4f81ae5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.189148 4953 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27258186-4cab-45b4-a20c-a4c3ddc82f76-config-data-default\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.195751 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b1b7520-f52c-4a2a-98e5-16ac7460bade-logs" (OuterVolumeSpecName: "logs") pod "4b1b7520-f52c-4a2a-98e5-16ac7460bade" (UID: "4b1b7520-f52c-4a2a-98e5-16ac7460bade"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.197714 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1b7520-f52c-4a2a-98e5-16ac7460bade-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4b1b7520-f52c-4a2a-98e5-16ac7460bade" (UID: "4b1b7520-f52c-4a2a-98e5-16ac7460bade"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.197864 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-kube-api-access-4lgdf" (OuterVolumeSpecName: "kube-api-access-4lgdf") pod "4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" (UID: "4b66dbe7-edd9-4e23-a3d0-0661efe89ac6"). InnerVolumeSpecName "kube-api-access-4lgdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.203527 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e067a835-8a1a-4672-aaea-b8c101109018-logs" (OuterVolumeSpecName: "logs") pod "e067a835-8a1a-4672-aaea-b8c101109018" (UID: "e067a835-8a1a-4672-aaea-b8c101109018"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.205473 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-logs" (OuterVolumeSpecName: "logs") pod "4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" (UID: "4b66dbe7-edd9-4e23-a3d0-0661efe89ac6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.205516 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e067a835-8a1a-4672-aaea-b8c101109018-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e067a835-8a1a-4672-aaea-b8c101109018" (UID: "e067a835-8a1a-4672-aaea-b8c101109018"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.206377 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron242b-account-delete-v47hk" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.215236 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4b1b7520-f52c-4a2a-98e5-16ac7460bade" (UID: "4b1b7520-f52c-4a2a-98e5-16ac7460bade"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.216925 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b1b7520-f52c-4a2a-98e5-16ac7460bade-kube-api-access-lw7zc" (OuterVolumeSpecName: "kube-api-access-lw7zc") pod "4b1b7520-f52c-4a2a-98e5-16ac7460bade" (UID: "4b1b7520-f52c-4a2a-98e5-16ac7460bade"). InnerVolumeSpecName "kube-api-access-lw7zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.217150 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "e067a835-8a1a-4672-aaea-b8c101109018" (UID: "e067a835-8a1a-4672-aaea-b8c101109018"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.217220 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e067a835-8a1a-4672-aaea-b8c101109018-kube-api-access-6x595" (OuterVolumeSpecName: "kube-api-access-6x595") pod "e067a835-8a1a-4672-aaea-b8c101109018" (UID: "e067a835-8a1a-4672-aaea-b8c101109018"). InnerVolumeSpecName "kube-api-access-6x595". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.217439 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-scripts" (OuterVolumeSpecName: "scripts") pod "4b1b7520-f52c-4a2a-98e5-16ac7460bade" (UID: "4b1b7520-f52c-4a2a-98e5-16ac7460bade"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.226869 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-scripts" (OuterVolumeSpecName: "scripts") pod "e067a835-8a1a-4672-aaea-b8c101109018" (UID: "e067a835-8a1a-4672-aaea-b8c101109018"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.240742 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinderaf3a-account-delete-rtt56" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.253500 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance5aff-account-delete-5hksm" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.275855 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caec0159-12b1-46f9-952c-10f229948036" (UID: "caec0159-12b1-46f9-952c-10f229948036"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.280331 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.293035 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46cd550e-17c8-4cd2-a5e0-9746edf42836-operator-scripts\") pod \"46cd550e-17c8-4cd2-a5e0-9746edf42836\" (UID: \"46cd550e-17c8-4cd2-a5e0-9746edf42836\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.293136 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-combined-ca-bundle\") pod \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.293215 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln9z6\" (UniqueName: \"kubernetes.io/projected/e6515789-e6f6-4aa3-83f3-4fc58f862dc9-kube-api-access-ln9z6\") pod \"e6515789-e6f6-4aa3-83f3-4fc58f862dc9\" (UID: \"e6515789-e6f6-4aa3-83f3-4fc58f862dc9\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.293263 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bsbd\" (UniqueName: \"kubernetes.io/projected/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-api-access-6bsbd\") pod \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.293291 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2ff6\" (UniqueName: \"kubernetes.io/projected/46cd550e-17c8-4cd2-a5e0-9746edf42836-kube-api-access-v2ff6\") pod \"46cd550e-17c8-4cd2-a5e0-9746edf42836\" (UID: \"46cd550e-17c8-4cd2-a5e0-9746edf42836\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.293341 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-state-metrics-tls-certs\") pod \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.293384 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgsqs\" (UniqueName: \"kubernetes.io/projected/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c-kube-api-access-mgsqs\") pod \"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c\" (UID: \"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.293417 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6515789-e6f6-4aa3-83f3-4fc58f862dc9-operator-scripts\") pod \"e6515789-e6f6-4aa3-83f3-4fc58f862dc9\" (UID: \"e6515789-e6f6-4aa3-83f3-4fc58f862dc9\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.293439 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c-operator-scripts\") pod \"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c\" (UID: \"992b7c13-c6c6-4641-9c9a-3d8bfbd5029c\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.293509 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-state-metrics-tls-config\") pod \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\" (UID: \"9da03c89-b3fb-431e-bef0-eb8f6d0b180e\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294104 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294126 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294138 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lgdf\" (UniqueName: \"kubernetes.io/projected/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-kube-api-access-4lgdf\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294148 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294156 4953 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b1b7520-f52c-4a2a-98e5-16ac7460bade-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294164 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e067a835-8a1a-4672-aaea-b8c101109018-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294172 4953 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e067a835-8a1a-4672-aaea-b8c101109018-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294181 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x595\" (UniqueName: \"kubernetes.io/projected/e067a835-8a1a-4672-aaea-b8c101109018-kube-api-access-6x595\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294190 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw7zc\" (UniqueName: \"kubernetes.io/projected/4b1b7520-f52c-4a2a-98e5-16ac7460bade-kube-api-access-lw7zc\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294199 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294207 4953 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294216 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b1b7520-f52c-4a2a-98e5-16ac7460bade-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.294224 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.296778 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "345a513a-93a0-4e23-9266-3eeaf3ff0c10" (UID: "345a513a-93a0-4e23-9266-3eeaf3ff0c10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.301621 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.302357 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46cd550e-17c8-4cd2-a5e0-9746edf42836-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46cd550e-17c8-4cd2-a5e0-9746edf42836" (UID: "46cd550e-17c8-4cd2-a5e0-9746edf42836"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.303445 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.312409 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6515789-e6f6-4aa3-83f3-4fc58f862dc9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6515789-e6f6-4aa3-83f3-4fc58f862dc9" (UID: "e6515789-e6f6-4aa3-83f3-4fc58f862dc9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.312984 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "992b7c13-c6c6-4641-9c9a-3d8bfbd5029c" (UID: "992b7c13-c6c6-4641-9c9a-3d8bfbd5029c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.313141 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.313215 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts podName:12df8687-e24e-47fb-802c-3ab978ed04fd nodeName:}" failed. No retries permitted until 2025-12-11 10:38:13.31319108 +0000 UTC m=+1611.337050173 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts") pod "novacell0caaa-account-delete-n4fck" (UID: "12df8687-e24e-47fb-802c-3ab978ed04fd") : configmap "openstack-scripts" not found Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.313273 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.313307 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts podName:b09879bd-62c8-4810-ad58-09db28d6afb5 nodeName:}" failed. 
No retries permitted until 2025-12-11 10:38:12.313296163 +0000 UTC m=+1610.337155246 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts") pod "barbican3c8c-account-delete-kzsq8" (UID: "b09879bd-62c8-4810-ad58-09db28d6afb5") : configmap "openstack-scripts" not found Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.313921 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.313959 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts podName:3aee1a2c-6a1e-48c0-9491-3f61371047eb nodeName:}" failed. No retries permitted until 2025-12-11 10:38:12.313948254 +0000 UTC m=+1610.337807317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts") pod "placementa6a0-account-delete-vhpnd" (UID: "3aee1a2c-6a1e-48c0-9491-3f61371047eb") : configmap "openstack-scripts" not found Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.314000 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.314029 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts podName:10e32559-b465-4538-af8b-9dd3deedf2b9 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:12.314019006 +0000 UTC m=+1610.337878129 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts") pod "novaapi339c-account-delete-l2kws" (UID: "10e32559-b465-4538-af8b-9dd3deedf2b9") : configmap "openstack-scripts" not found Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.321142 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-config-data" (OuterVolumeSpecName: "config-data") pod "4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" (UID: "4b66dbe7-edd9-4e23-a3d0-0661efe89ac6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.321882 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46cd550e-17c8-4cd2-a5e0-9746edf42836-kube-api-access-v2ff6" (OuterVolumeSpecName: "kube-api-access-v2ff6") pod "46cd550e-17c8-4cd2-a5e0-9746edf42836" (UID: "46cd550e-17c8-4cd2-a5e0-9746edf42836"). InnerVolumeSpecName "kube-api-access-v2ff6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.335843 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.347534 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6515789-e6f6-4aa3-83f3-4fc58f862dc9-kube-api-access-ln9z6" (OuterVolumeSpecName: "kube-api-access-ln9z6") pod "e6515789-e6f6-4aa3-83f3-4fc58f862dc9" (UID: "e6515789-e6f6-4aa3-83f3-4fc58f862dc9"). InnerVolumeSpecName "kube-api-access-ln9z6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.349741 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c-kube-api-access-mgsqs" (OuterVolumeSpecName: "kube-api-access-mgsqs") pod "992b7c13-c6c6-4641-9c9a-3d8bfbd5029c" (UID: "992b7c13-c6c6-4641-9c9a-3d8bfbd5029c"). InnerVolumeSpecName "kube-api-access-mgsqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.354151 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.359478 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-api-access-6bsbd" (OuterVolumeSpecName: "kube-api-access-6bsbd") pod "9da03c89-b3fb-431e-bef0-eb8f6d0b180e" (UID: "9da03c89-b3fb-431e-bef0-eb8f6d0b180e"). InnerVolumeSpecName "kube-api-access-6bsbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.359633 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e067a835-8a1a-4672-aaea-b8c101109018" (UID: "e067a835-8a1a-4672-aaea-b8c101109018"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395308 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-config-data\") pod \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395369 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8f5v\" (UniqueName: \"kubernetes.io/projected/7b77681a-0823-42e6-b0a4-2af1ce955970-kube-api-access-q8f5v\") pod \"7b77681a-0823-42e6-b0a4-2af1ce955970\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395526 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b77681a-0823-42e6-b0a4-2af1ce955970-httpd-run\") pod \"7b77681a-0823-42e6-b0a4-2af1ce955970\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395556 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b77681a-0823-42e6-b0a4-2af1ce955970-logs\") pod \"7b77681a-0823-42e6-b0a4-2af1ce955970\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395632 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-combined-ca-bundle\") pod \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395662 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-config-data-custom\") pod \"767370a9-f8dd-4370-a2cc-f5baeff52c54\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395708 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/767370a9-f8dd-4370-a2cc-f5baeff52c54-logs\") pod \"767370a9-f8dd-4370-a2cc-f5baeff52c54\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395732 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx6ft\" (UniqueName: \"kubernetes.io/projected/5c566b6b-16f8-422c-acda-0325e36103e6-kube-api-access-fx6ft\") pod \"5c566b6b-16f8-422c-acda-0325e36103e6\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395782 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-config-data\") pod \"767370a9-f8dd-4370-a2cc-f5baeff52c54\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395837 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl6kg\" (UniqueName: \"kubernetes.io/projected/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-kube-api-access-xl6kg\") pod \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395865 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-scripts\") pod \"7b77681a-0823-42e6-b0a4-2af1ce955970\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395889 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-public-tls-certs\") pod \"767370a9-f8dd-4370-a2cc-f5baeff52c54\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395922 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-logs\") pod \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395945 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-config-data\") pod \"5c566b6b-16f8-422c-acda-0325e36103e6\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.395982 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-config-data\") pod \"7b77681a-0823-42e6-b0a4-2af1ce955970\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396039 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-internal-tls-certs\") pod \"7b77681a-0823-42e6-b0a4-2af1ce955970\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396093 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-nova-metadata-tls-certs\") pod \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\" (UID: \"cd4593de-19d2-47c1-b6b0-b9c0e46e1107\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396123 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-combined-ca-bundle\") pod \"7b77681a-0823-42e6-b0a4-2af1ce955970\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396148 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6npsw\" (UniqueName: \"kubernetes.io/projected/ab07f951-5c8d-428b-9b26-52ea2284ee52-kube-api-access-6npsw\") pod \"ab07f951-5c8d-428b-9b26-52ea2284ee52\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396172 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab07f951-5c8d-428b-9b26-52ea2284ee52-memcached-tls-certs\") pod \"ab07f951-5c8d-428b-9b26-52ea2284ee52\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396198 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab07f951-5c8d-428b-9b26-52ea2284ee52-config-data\") pod \"ab07f951-5c8d-428b-9b26-52ea2284ee52\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396687 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab07f951-5c8d-428b-9b26-52ea2284ee52-combined-ca-bundle\") pod \"ab07f951-5c8d-428b-9b26-52ea2284ee52\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396731 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"7b77681a-0823-42e6-b0a4-2af1ce955970\" (UID: \"7b77681a-0823-42e6-b0a4-2af1ce955970\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396766 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqtmj\" (UniqueName: \"kubernetes.io/projected/767370a9-f8dd-4370-a2cc-f5baeff52c54-kube-api-access-dqtmj\") pod \"767370a9-f8dd-4370-a2cc-f5baeff52c54\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396795 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-combined-ca-bundle\") pod \"5c566b6b-16f8-422c-acda-0325e36103e6\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396827 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-combined-ca-bundle\") pod \"767370a9-f8dd-4370-a2cc-f5baeff52c54\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396849 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-internal-tls-certs\") pod \"767370a9-f8dd-4370-a2cc-f5baeff52c54\" (UID: \"767370a9-f8dd-4370-a2cc-f5baeff52c54\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.396874 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab07f951-5c8d-428b-9b26-52ea2284ee52-kolla-config\") pod \"ab07f951-5c8d-428b-9b26-52ea2284ee52\" (UID: \"ab07f951-5c8d-428b-9b26-52ea2284ee52\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.399051 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b77681a-0823-42e6-b0a4-2af1ce955970-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7b77681a-0823-42e6-b0a4-2af1ce955970" (UID: "7b77681a-0823-42e6-b0a4-2af1ce955970"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.400307 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b77681a-0823-42e6-b0a4-2af1ce955970-logs" (OuterVolumeSpecName: "logs") pod "7b77681a-0823-42e6-b0a4-2af1ce955970" (UID: "7b77681a-0823-42e6-b0a4-2af1ce955970"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.402440 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/767370a9-f8dd-4370-a2cc-f5baeff52c54-logs" (OuterVolumeSpecName: "logs") pod "767370a9-f8dd-4370-a2cc-f5baeff52c54" (UID: "767370a9-f8dd-4370-a2cc-f5baeff52c54"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.404279 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab07f951-5c8d-428b-9b26-52ea2284ee52-config-data" (OuterVolumeSpecName: "config-data") pod "ab07f951-5c8d-428b-9b26-52ea2284ee52" (UID: "ab07f951-5c8d-428b-9b26-52ea2284ee52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.406993 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-logs" (OuterVolumeSpecName: "logs") pod "cd4593de-19d2-47c1-b6b0-b9c0e46e1107" (UID: "cd4593de-19d2-47c1-b6b0-b9c0e46e1107"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.408741 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.408785 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46cd550e-17c8-4cd2-a5e0-9746edf42836-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.408804 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln9z6\" (UniqueName: \"kubernetes.io/projected/e6515789-e6f6-4aa3-83f3-4fc58f862dc9-kube-api-access-ln9z6\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.408823 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bsbd\" (UniqueName: \"kubernetes.io/projected/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-api-access-6bsbd\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.408839 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2ff6\" (UniqueName: \"kubernetes.io/projected/46cd550e-17c8-4cd2-a5e0-9746edf42836-kube-api-access-v2ff6\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.408853 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgsqs\" (UniqueName: \"kubernetes.io/projected/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c-kube-api-access-mgsqs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.408867 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6515789-e6f6-4aa3-83f3-4fc58f862dc9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.408881 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.408897 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.408915 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.409691 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "9da03c89-b3fb-431e-bef0-eb8f6d0b180e" (UID: "9da03c89-b3fb-431e-bef0-eb8f6d0b180e"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.409956 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab07f951-5c8d-428b-9b26-52ea2284ee52-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ab07f951-5c8d-428b-9b26-52ea2284ee52" (UID: "ab07f951-5c8d-428b-9b26-52ea2284ee52"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.410705 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-kube-api-access-xl6kg" (OuterVolumeSpecName: "kube-api-access-xl6kg") pod "cd4593de-19d2-47c1-b6b0-b9c0e46e1107" (UID: "cd4593de-19d2-47c1-b6b0-b9c0e46e1107"). InnerVolumeSpecName "kube-api-access-xl6kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.410767 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "767370a9-f8dd-4370-a2cc-f5baeff52c54" (UID: "767370a9-f8dd-4370-a2cc-f5baeff52c54"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.410830 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c566b6b-16f8-422c-acda-0325e36103e6-kube-api-access-fx6ft" (OuterVolumeSpecName: "kube-api-access-fx6ft") pod "5c566b6b-16f8-422c-acda-0325e36103e6" (UID: "5c566b6b-16f8-422c-acda-0325e36103e6"). InnerVolumeSpecName "kube-api-access-fx6ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.410878 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "7b77681a-0823-42e6-b0a4-2af1ce955970" (UID: "7b77681a-0823-42e6-b0a4-2af1ce955970"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.427875 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab07f951-5c8d-428b-9b26-52ea2284ee52-kube-api-access-6npsw" (OuterVolumeSpecName: "kube-api-access-6npsw") pod "ab07f951-5c8d-428b-9b26-52ea2284ee52" (UID: "ab07f951-5c8d-428b-9b26-52ea2284ee52"). InnerVolumeSpecName "kube-api-access-6npsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.428022 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b77681a-0823-42e6-b0a4-2af1ce955970-kube-api-access-q8f5v" (OuterVolumeSpecName: "kube-api-access-q8f5v") pod "7b77681a-0823-42e6-b0a4-2af1ce955970" (UID: "7b77681a-0823-42e6-b0a4-2af1ce955970"). InnerVolumeSpecName "kube-api-access-q8f5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.428130 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/767370a9-f8dd-4370-a2cc-f5baeff52c54-kube-api-access-dqtmj" (OuterVolumeSpecName: "kube-api-access-dqtmj") pod "767370a9-f8dd-4370-a2cc-f5baeff52c54" (UID: "767370a9-f8dd-4370-a2cc-f5baeff52c54"). InnerVolumeSpecName "kube-api-access-dqtmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.431633 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.434685 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-scripts" (OuterVolumeSpecName: "scripts") pod "7b77681a-0823-42e6-b0a4-2af1ce955970" (UID: "7b77681a-0823-42e6-b0a4-2af1ce955970"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.434732 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b1b7520-f52c-4a2a-98e5-16ac7460bade" (UID: "4b1b7520-f52c-4a2a-98e5-16ac7460bade"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.480262 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-config-data" (OuterVolumeSpecName: "config-data") pod "345a513a-93a0-4e23-9266-3eeaf3ff0c10" (UID: "345a513a-93a0-4e23-9266-3eeaf3ff0c10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.483831 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.502505 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27258186-4cab-45b4-a20c-a4c3ddc82f76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27258186-4cab-45b4-a20c-a4c3ddc82f76" (UID: "27258186-4cab-45b4-a20c-a4c3ddc82f76"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.510486 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab07f951-5c8d-428b-9b26-52ea2284ee52-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.510736 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.510817 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqtmj\" (UniqueName: \"kubernetes.io/projected/767370a9-f8dd-4370-a2cc-f5baeff52c54-kube-api-access-dqtmj\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.513287 4953 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab07f951-5c8d-428b-9b26-52ea2284ee52-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.513384 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8f5v\" (UniqueName: \"kubernetes.io/projected/7b77681a-0823-42e6-b0a4-2af1ce955970-kube-api-access-q8f5v\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.513467 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.513529 4953 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b77681a-0823-42e6-b0a4-2af1ce955970-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.513615 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b77681a-0823-42e6-b0a4-2af1ce955970-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.513700 4953 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.513784 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.513863 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27258186-4cab-45b4-a20c-a4c3ddc82f76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.513931 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/767370a9-f8dd-4370-a2cc-f5baeff52c54-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.513990 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx6ft\" (UniqueName: \"kubernetes.io/projected/5c566b6b-16f8-422c-acda-0325e36103e6-kube-api-access-fx6ft\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: 
I1211 10:38:11.514056 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.514150 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.514261 4953 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.514356 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl6kg\" (UniqueName: \"kubernetes.io/projected/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-kube-api-access-xl6kg\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.514444 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.514529 4953 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.514694 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6npsw\" (UniqueName: \"kubernetes.io/projected/ab07f951-5c8d-428b-9b26-52ea2284ee52-kube-api-access-6npsw\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.529692 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9da03c89-b3fb-431e-bef0-eb8f6d0b180e" (UID: "9da03c89-b3fb-431e-bef0-eb8f6d0b180e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.538277 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" (UID: "4b66dbe7-edd9-4e23-a3d0-0661efe89ac6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.583560 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-config-data" (OuterVolumeSpecName: "config-data") pod "cd4593de-19d2-47c1-b6b0-b9c0e46e1107" (UID: "cd4593de-19d2-47c1-b6b0-b9c0e46e1107"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.586400 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab07f951-5c8d-428b-9b26-52ea2284ee52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab07f951-5c8d-428b-9b26-52ea2284ee52" (UID: "ab07f951-5c8d-428b-9b26-52ea2284ee52"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.612882 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "9da03c89-b3fb-431e-bef0-eb8f6d0b180e" (UID: "9da03c89-b3fb-431e-bef0-eb8f6d0b180e"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.613663 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab07f951-5c8d-428b-9b26-52ea2284ee52-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "ab07f951-5c8d-428b-9b26-52ea2284ee52" (UID: "ab07f951-5c8d-428b-9b26-52ea2284ee52"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.616858 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c566b6b-16f8-422c-acda-0325e36103e6" (UID: "5c566b6b-16f8-422c-acda-0325e36103e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.617097 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-combined-ca-bundle\") pod \"5c566b6b-16f8-422c-acda-0325e36103e6\" (UID: \"5c566b6b-16f8-422c-acda-0325e36103e6\") " Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.618190 4953 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab07f951-5c8d-428b-9b26-52ea2284ee52-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.618220 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab07f951-5c8d-428b-9b26-52ea2284ee52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.618236 4953 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.618252 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.618265 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.618277 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da03c89-b3fb-431e-bef0-eb8f6d0b180e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: W1211 10:38:11.620927 4953 empty_dir.go:500] Warning: Unmount skipped because 
path does not exist: /var/lib/kubelet/pods/5c566b6b-16f8-422c-acda-0325e36103e6/volumes/kubernetes.io~secret/combined-ca-bundle Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.620951 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c566b6b-16f8-422c-acda-0325e36103e6" (UID: "5c566b6b-16f8-422c-acda-0325e36103e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.638970 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-config-data" (OuterVolumeSpecName: "config-data") pod "5c566b6b-16f8-422c-acda-0325e36103e6" (UID: "5c566b6b-16f8-422c-acda-0325e36103e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.641294 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd4593de-19d2-47c1-b6b0-b9c0e46e1107" (UID: "cd4593de-19d2-47c1-b6b0-b9c0e46e1107"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.655210 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-config-data" (OuterVolumeSpecName: "config-data") pod "caec0159-12b1-46f9-952c-10f229948036" (UID: "caec0159-12b1-46f9-952c-10f229948036"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.655912 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.668772 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.670699 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.670780 4953 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="23f99edb-3870-42f3-bdef-ec4db335ba35" containerName="galera" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.673699 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "345a513a-93a0-4e23-9266-3eeaf3ff0c10" (UID: "345a513a-93a0-4e23-9266-3eeaf3ff0c10"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.685521 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27258186-4cab-45b4-a20c-a4c3ddc82f76-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "27258186-4cab-45b4-a20c-a4c3ddc82f76" (UID: "27258186-4cab-45b4-a20c-a4c3ddc82f76"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.696671 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b77681a-0823-42e6-b0a4-2af1ce955970" (UID: "7b77681a-0823-42e6-b0a4-2af1ce955970"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.696916 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-config-data" (OuterVolumeSpecName: "config-data") pod "7b77681a-0823-42e6-b0a4-2af1ce955970" (UID: "7b77681a-0823-42e6-b0a4-2af1ce955970"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.702330 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-config-data" (OuterVolumeSpecName: "config-data") pod "767370a9-f8dd-4370-a2cc-f5baeff52c54" (UID: "767370a9-f8dd-4370-a2cc-f5baeff52c54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.704438 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.708844 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-config-data" (OuterVolumeSpecName: "config-data") pod "4b1b7520-f52c-4a2a-98e5-16ac7460bade" (UID: "4b1b7520-f52c-4a2a-98e5-16ac7460bade"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.719992 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.720024 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.720035 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.720047 4953 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.720057 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.720066 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.720075 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec0159-12b1-46f9-952c-10f229948036-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.720083 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c566b6b-16f8-422c-acda-0325e36103e6-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.720091 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.720102 
4953 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27258186-4cab-45b4-a20c-a4c3ddc82f76-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.720111 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.744677 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "345a513a-93a0-4e23-9266-3eeaf3ff0c10" (UID: "345a513a-93a0-4e23-9266-3eeaf3ff0c10"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.744877 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "767370a9-f8dd-4370-a2cc-f5baeff52c54" (UID: "767370a9-f8dd-4370-a2cc-f5baeff52c54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.757026 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7b77681a-0823-42e6-b0a4-2af1ce955970" (UID: "7b77681a-0823-42e6-b0a4-2af1ce955970"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.757649 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4b1b7520-f52c-4a2a-98e5-16ac7460bade" (UID: "4b1b7520-f52c-4a2a-98e5-16ac7460bade"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.760727 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" (UID: "4b66dbe7-edd9-4e23-a3d0-0661efe89ac6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.771105 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4b1b7520-f52c-4a2a-98e5-16ac7460bade" (UID: "4b1b7520-f52c-4a2a-98e5-16ac7460bade"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.776391 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "767370a9-f8dd-4370-a2cc-f5baeff52c54" (UID: "767370a9-f8dd-4370-a2cc-f5baeff52c54"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.795741 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" (UID: "4b66dbe7-edd9-4e23-a3d0-0661efe89ac6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.812876 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e067a835-8a1a-4672-aaea-b8c101109018" (UID: "e067a835-8a1a-4672-aaea-b8c101109018"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.812900 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "767370a9-f8dd-4370-a2cc-f5baeff52c54" (UID: "767370a9-f8dd-4370-a2cc-f5baeff52c54"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.819791 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cd4593de-19d2-47c1-b6b0-b9c0e46e1107" (UID: "cd4593de-19d2-47c1-b6b0-b9c0e46e1107"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.824112 4953 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4593de-19d2-47c1-b6b0-b9c0e46e1107-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.824155 4953 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.824169 4953 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/345a513a-93a0-4e23-9266-3eeaf3ff0c10-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.824181 4953 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.824194 4953 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.824208 4953 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.824221 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.824233 4953 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/767370a9-f8dd-4370-a2cc-f5baeff52c54-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.824244 4953 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1b7520-f52c-4a2a-98e5-16ac7460bade-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.824257 4953 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b77681a-0823-42e6-b0a4-2af1ce955970-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.824269 4953 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.824346 4953 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 11 10:38:11 crc kubenswrapper[4953]: E1211 10:38:11.824442 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data podName:b29c8985-0d8c-4382-9969-29422929136f nodeName:}" failed. 
No retries permitted until 2025-12-11 10:38:19.824388767 +0000 UTC m=+1617.848247800 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data") pod "rabbitmq-server-0" (UID: "b29c8985-0d8c-4382-9969-29422929136f") : configmap "rabbitmq-config-data" not found Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.830603 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-config-data" (OuterVolumeSpecName: "config-data") pod "e067a835-8a1a-4672-aaea-b8c101109018" (UID: "e067a835-8a1a-4672-aaea-b8c101109018"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.835946 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7af3727e-8096-420d-b8d0-95988a5d36db","Type":"ContainerDied","Data":"86825f2ee0b416e454f9379aa6d0427a47b3a0136dbdcb16fc2c48e6946ce5b8"} Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.836001 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86825f2ee0b416e454f9379aa6d0427a47b3a0136dbdcb16fc2c48e6946ce5b8" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.841338 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c85df7b9d-rdbfq" event={"ID":"767370a9-f8dd-4370-a2cc-f5baeff52c54","Type":"ContainerDied","Data":"938c6088d34c783f9d003005b0a091ac9452749112428e1e65d1b0596807ca34"} Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.841392 4953 scope.go:117] "RemoveContainer" containerID="f1f3935cba9d49f468aa48835e818e819d4e1455992846d4cd92a2e960523799" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.841530 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c85df7b9d-rdbfq" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.850959 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.850980 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5c566b6b-16f8-422c-acda-0325e36103e6","Type":"ContainerDied","Data":"46060cb0b03d6217003dc0e2828ed30e4435ff802d981fb000c90b8daf0398fb"} Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.858116 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd4593de-19d2-47c1-b6b0-b9c0e46e1107","Type":"ContainerDied","Data":"5febf106fc861bdba9ab3c21cea0374dd05c3d46b644b1e6266d7eb01aa1ff7e"} Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.858117 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.860217 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.862989 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.863067 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.864773 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance5aff-account-delete-5hksm" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.865045 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinderaf3a-account-delete-rtt56" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.865114 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ab07f951-5c8d-428b-9b26-52ea2284ee52","Type":"ContainerDied","Data":"7738e432bc1c8b812b699e92585c8058cc38d67737fb88e0c1088d675668ea1e"} Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.865277 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-555fcfcf54-sqln7" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.865332 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone7975-account-delete-bmljp" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.867339 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.867519 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron242b-account-delete-v47hk" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.867957 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.876052 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6cffd87c8c-wlgnt" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.876333 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7567d9469d-rx5dx" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.876607 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.877462 4953 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novaapi339c-account-delete-l2kws" secret="" err="secret \"galera-openstack-dockercfg-4mfbc\" not found" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.877632 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.878005 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.892229 4953 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/placementa6a0-account-delete-vhpnd" secret="" err="secret \"galera-openstack-dockercfg-4mfbc\" not found" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.894132 4953 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/barbican3c8c-account-delete-kzsq8" secret="" err="secret \"galera-openstack-dockercfg-4mfbc\" not found" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.926731 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e067a835-8a1a-4672-aaea-b8c101109018-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:11 crc kubenswrapper[4953]: I1211 10:38:11.954224 4953 scope.go:117] "RemoveContainer" containerID="ae3f22ec9f89b003c85fac5cd8cf0695244934ca68cf1fc2a0f17935650f23bf" Dec 11 10:38:12 crc kubenswrapper[4953]: I1211 10:38:12.273830 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:38:12 crc kubenswrapper[4953]: I1211 10:38:12.285555 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone7975-account-delete-bmljp" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.308459 4953 scope.go:117] "RemoveContainer" containerID="83eedb4ddd84362084d8ccac38fed9fcbcacfbfefe97227d1e7bf4df1164fbc0" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.334231 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af3727e-8096-420d-b8d0-95988a5d36db-config-data\") pod \"7af3727e-8096-420d-b8d0-95988a5d36db\" (UID: \"7af3727e-8096-420d-b8d0-95988a5d36db\") " Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.334303 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrt7p\" (UniqueName: \"kubernetes.io/projected/7af3727e-8096-420d-b8d0-95988a5d36db-kube-api-access-mrt7p\") pod \"7af3727e-8096-420d-b8d0-95988a5d36db\" (UID: \"7af3727e-8096-420d-b8d0-95988a5d36db\") " Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.334359 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af3727e-8096-420d-b8d0-95988a5d36db-combined-ca-bundle\") pod \"7af3727e-8096-420d-b8d0-95988a5d36db\" (UID: \"7af3727e-8096-420d-b8d0-95988a5d36db\") " Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:12.335301 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:12.335354 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts podName:b09879bd-62c8-4810-ad58-09db28d6afb5 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:14.335334646 +0000 UTC m=+1612.359193679 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts") pod "barbican3c8c-account-delete-kzsq8" (UID: "b09879bd-62c8-4810-ad58-09db28d6afb5") : configmap "openstack-scripts" not found Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:12.336009 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:12.336029 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:12.336045 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts podName:3aee1a2c-6a1e-48c0-9491-3f61371047eb nodeName:}" failed. No retries permitted until 2025-12-11 10:38:14.336036058 +0000 UTC m=+1612.359895091 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts") pod "placementa6a0-account-delete-vhpnd" (UID: "3aee1a2c-6a1e-48c0-9491-3f61371047eb") : configmap "openstack-scripts" not found Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:12.336100 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts podName:10e32559-b465-4538-af8b-9dd3deedf2b9 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:14.33608397 +0000 UTC m=+1612.359943003 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts") pod "novaapi339c-account-delete-l2kws" (UID: "10e32559-b465-4538-af8b-9dd3deedf2b9") : configmap "openstack-scripts" not found Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.338437 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.371491 4953 scope.go:117] "RemoveContainer" containerID="d01aaa77da386e9baab54f2e6b436105ab0703db857b3d9adc7c4e2df8f0e6e2" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.371860 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af3727e-8096-420d-b8d0-95988a5d36db-kube-api-access-mrt7p" (OuterVolumeSpecName: "kube-api-access-mrt7p") pod "7af3727e-8096-420d-b8d0-95988a5d36db" (UID: "7af3727e-8096-420d-b8d0-95988a5d36db"). InnerVolumeSpecName "kube-api-access-mrt7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.375688 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af3727e-8096-420d-b8d0-95988a5d36db-config-data" (OuterVolumeSpecName: "config-data") pod "7af3727e-8096-420d-b8d0-95988a5d36db" (UID: "7af3727e-8096-420d-b8d0-95988a5d36db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.383853 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af3727e-8096-420d-b8d0-95988a5d36db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7af3727e-8096-420d-b8d0-95988a5d36db" (UID: "7af3727e-8096-420d-b8d0-95988a5d36db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.443007 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-operator-scripts\") pod \"23f99edb-3870-42f3-bdef-ec4db335ba35\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.443039 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23f99edb-3870-42f3-bdef-ec4db335ba35-config-data-generated\") pod \"23f99edb-3870-42f3-bdef-ec4db335ba35\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.443065 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-config-data-default\") pod \"23f99edb-3870-42f3-bdef-ec4db335ba35\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.443081 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-kolla-config\") pod \"23f99edb-3870-42f3-bdef-ec4db335ba35\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.443183 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f99edb-3870-42f3-bdef-ec4db335ba35-combined-ca-bundle\") pod \"23f99edb-3870-42f3-bdef-ec4db335ba35\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.443221 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"23f99edb-3870-42f3-bdef-ec4db335ba35\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.443267 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f99edb-3870-42f3-bdef-ec4db335ba35-galera-tls-certs\") pod \"23f99edb-3870-42f3-bdef-ec4db335ba35\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.443286 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9xhj\" (UniqueName: \"kubernetes.io/projected/23f99edb-3870-42f3-bdef-ec4db335ba35-kube-api-access-c9xhj\") pod \"23f99edb-3870-42f3-bdef-ec4db335ba35\" (UID: \"23f99edb-3870-42f3-bdef-ec4db335ba35\") " Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.444789 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7af3727e-8096-420d-b8d0-95988a5d36db-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.444809 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrt7p\" (UniqueName: \"kubernetes.io/projected/7af3727e-8096-420d-b8d0-95988a5d36db-kube-api-access-mrt7p\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.444819 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af3727e-8096-420d-b8d0-95988a5d36db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.446237 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "23f99edb-3870-42f3-bdef-ec4db335ba35" (UID: "23f99edb-3870-42f3-bdef-ec4db335ba35"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.447395 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23f99edb-3870-42f3-bdef-ec4db335ba35-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "23f99edb-3870-42f3-bdef-ec4db335ba35" (UID: "23f99edb-3870-42f3-bdef-ec4db335ba35"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.447642 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "23f99edb-3870-42f3-bdef-ec4db335ba35" (UID: "23f99edb-3870-42f3-bdef-ec4db335ba35"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.448224 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f99edb-3870-42f3-bdef-ec4db335ba35-kube-api-access-c9xhj" (OuterVolumeSpecName: "kube-api-access-c9xhj") pod "23f99edb-3870-42f3-bdef-ec4db335ba35" (UID: "23f99edb-3870-42f3-bdef-ec4db335ba35"). InnerVolumeSpecName "kube-api-access-c9xhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.448306 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23f99edb-3870-42f3-bdef-ec4db335ba35" (UID: "23f99edb-3870-42f3-bdef-ec4db335ba35"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.456162 4953 scope.go:117] "RemoveContainer" containerID="f81a9c3634afcd79a633362dcf52201c0a4c001fbfe1929486695ab342d99feb" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.468809 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.477042 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "23f99edb-3870-42f3-bdef-ec4db335ba35" (UID: "23f99edb-3870-42f3-bdef-ec4db335ba35"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.499832 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f99edb-3870-42f3-bdef-ec4db335ba35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23f99edb-3870-42f3-bdef-ec4db335ba35" (UID: "23f99edb-3870-42f3-bdef-ec4db335ba35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.499911 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79a93889-ae40-4bd1-a697-5797e065231b" path="/var/lib/kubelet/pods/79a93889-ae40-4bd1-a697-5797e065231b/volumes" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.500718 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1833793-1408-450f-8a7e-e01e6048edd5" path="/var/lib/kubelet/pods/d1833793-1408-450f-8a7e-e01e6048edd5/volumes" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.546248 4953 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23f99edb-3870-42f3-bdef-ec4db335ba35-config-data-generated\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.546277 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.546286 4953 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-config-data-default\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.546294 4953 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23f99edb-3870-42f3-bdef-ec4db335ba35-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.546304 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f99edb-3870-42f3-bdef-ec4db335ba35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.546322 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.546331 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9xhj\" (UniqueName: 
\"kubernetes.io/projected/23f99edb-3870-42f3-bdef-ec4db335ba35-kube-api-access-c9xhj\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.550141 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f99edb-3870-42f3-bdef-ec4db335ba35-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "23f99edb-3870-42f3-bdef-ec4db335ba35" (UID: "23f99edb-3870-42f3-bdef-ec4db335ba35"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.554117 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.554150 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-78f5cf7bd5-24fm8"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.554182 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-78f5cf7bd5-24fm8"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.554195 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.564248 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.576388 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7567d9469d-rx5dx"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.585058 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7567d9469d-rx5dx"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.588897 4953 scope.go:117] "RemoveContainer" containerID="41bbb6ee795ebc3c22e509c06b7f775810c8aed2e9da9f0f6b746d7e045c0c23" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.589965 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.600146 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6cffd87c8c-wlgnt"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.609488 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6cffd87c8c-wlgnt"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.617043 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.624667 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.631533 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-555fcfcf54-sqln7"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.639736 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-555fcfcf54-sqln7"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.647163 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.648205 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 
10:38:12.648223 4953 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f99edb-3870-42f3-bdef-ec4db335ba35-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.654953 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.660624 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.666021 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.672652 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.680158 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.687033 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7c85df7b9d-rdbfq"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.696088 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7c85df7b9d-rdbfq"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.704597 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.712542 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.720781 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.774162 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.787412 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.787466 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.845717 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-6zplv"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.864629 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-6zplv"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.879134 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinderaf3a-account-delete-rtt56"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.883762 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4287349e-ff2e-483c-9ede-08ec5e03a2b4/ovn-northd/0.log" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.883800 4953 generic.go:334] "Generic (PLEG): container finished" podID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" containerID="c0f6853d6258372aa9946f5e58c9f253d8e32cbaa5a5914801b2de468c7d1703" exitCode=139 Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.883856 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4287349e-ff2e-483c-9ede-08ec5e03a2b4","Type":"ContainerDied","Data":"c0f6853d6258372aa9946f5e58c9f253d8e32cbaa5a5914801b2de468c7d1703"} Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.903647 
4953 generic.go:334] "Generic (PLEG): container finished" podID="23f99edb-3870-42f3-bdef-ec4db335ba35" containerID="027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476" exitCode=0 Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.903756 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23f99edb-3870-42f3-bdef-ec4db335ba35","Type":"ContainerDied","Data":"027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476"} Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.903787 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23f99edb-3870-42f3-bdef-ec4db335ba35","Type":"ContainerDied","Data":"19f2ff2b286c609eb991cdae646608c15ba0dd6a59d7d95eb2c9b362d53eba22"} Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.903817 4953 scope.go:117] "RemoveContainer" containerID="027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.903995 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.910904 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-af3a-account-create-update-jhm5l"] Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.921949 4953 generic.go:334] "Generic (PLEG): container finished" podID="01196778-96de-4f79-b9ac-e01243f86ebb" containerID="a1bc8164296634778d4abaa0460ca228c5ac0bad626c3a54c3a93f97fe857237" exitCode=0 Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.922027 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"01196778-96de-4f79-b9ac-e01243f86ebb","Type":"ContainerDied","Data":"a1bc8164296634778d4abaa0460ca228c5ac0bad626c3a54c3a93f97fe857237"} Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.932540 4953 generic.go:334] "Generic (PLEG): container finished" podID="b4e64ea9-3129-46a7-8197-bdd7730ad3f1" containerID="e95a830582a33c31ab5aeaf4d56f0badd309548a0a67ed5368d9a6983add712a" exitCode=0 Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.932618 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55b68558f8-r49n8" event={"ID":"b4e64ea9-3129-46a7-8197-bdd7730ad3f1","Type":"ContainerDied","Data":"e95a830582a33c31ab5aeaf4d56f0badd309548a0a67ed5368d9a6983add712a"} Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.935131 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone7975-account-delete-bmljp" Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.936287 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.981278 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinderaf3a-account-delete-rtt56"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.985464 4953 scope.go:117] "RemoveContainer" containerID="6ee2835a71d7d5e83718a29d1cfff494a3741681572cda87af607b814ba32761"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.991787 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-af3a-account-create-update-jhm5l"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:12.998430 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-6vj2c"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.014385 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6vj2c"]
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.016407 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.016436 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.017552 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.020056 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.020106 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.028773 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron242b-account-delete-v47hk"]
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.029136 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.031284 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.031337 4953 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovs-vswitchd"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.036860 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-242b-account-create-update-x7c2z"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.048184 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron242b-account-delete-v47hk"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.054218 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-242b-account-create-update-x7c2z"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.063757 4953 scope.go:117] "RemoveContainer" containerID="027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476"
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.064143 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476\": container with ID starting with 027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476 not found: ID does not exist" containerID="027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.064186 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476"} err="failed to get container status \"027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476\": rpc error: code = NotFound desc = could not find container \"027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476\": container with ID starting with 027468b4fd5a12e7e2d663076aa4064b5a0635d8ef820b16038cc9ed4dd22476 not found: ID does not exist"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.064215 4953 scope.go:117] "RemoveContainer" containerID="6ee2835a71d7d5e83718a29d1cfff494a3741681572cda87af607b814ba32761"
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.064468 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee2835a71d7d5e83718a29d1cfff494a3741681572cda87af607b814ba32761\": container with ID starting with 6ee2835a71d7d5e83718a29d1cfff494a3741681572cda87af607b814ba32761 not found: ID does not exist" containerID="6ee2835a71d7d5e83718a29d1cfff494a3741681572cda87af607b814ba32761"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.064493 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee2835a71d7d5e83718a29d1cfff494a3741681572cda87af607b814ba32761"} err="failed to get container status \"6ee2835a71d7d5e83718a29d1cfff494a3741681572cda87af607b814ba32761\": rpc error: code = NotFound desc = could not find container \"6ee2835a71d7d5e83718a29d1cfff494a3741681572cda87af607b814ba32761\": container with ID starting with 6ee2835a71d7d5e83718a29d1cfff494a3741681572cda87af607b814ba32761 not found: ID does not exist"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.066961 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone7975-account-delete-bmljp"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.072377 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone7975-account-delete-bmljp"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.077653 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.083509 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bckrd\" (UniqueName: \"kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd\") pod \"keystone7975-account-delete-bmljp\" (UID: \"38d12eb6-7ca0-4003-a9f7-f691f65097e4\") " pod="openstack/keystone7975-account-delete-bmljp"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.083607 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts\") pod \"keystone7975-account-delete-bmljp\" (UID: \"38d12eb6-7ca0-4003-a9f7-f691f65097e4\") " pod="openstack/keystone7975-account-delete-bmljp"
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.083757 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.083846 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts podName:38d12eb6-7ca0-4003-a9f7-f691f65097e4 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:17.083823931 +0000 UTC m=+1615.107683034 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts") pod "keystone7975-account-delete-bmljp" (UID: "38d12eb6-7ca0-4003-a9f7-f691f65097e4") : configmap "openstack-scripts" not found
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.084649 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.085700 4953 projected.go:194] Error preparing data for projected volume kube-api-access-bckrd for pod openstack/keystone7975-account-delete-bmljp: failed to fetch token: pod "keystone7975-account-delete-bmljp" not found
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.085903 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd podName:38d12eb6-7ca0-4003-a9f7-f691f65097e4 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:17.085885006 +0000 UTC m=+1615.109744039 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bckrd" (UniqueName: "kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd") pod "keystone7975-account-delete-bmljp" (UID: "38d12eb6-7ca0-4003-a9f7-f691f65097e4") : failed to fetch token: pod "keystone7975-account-delete-bmljp" not found
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.095748 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.116140 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-ktvrp"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.127667 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.140700 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-ktvrp"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.207943 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bckrd\" (UniqueName: \"kubernetes.io/projected/38d12eb6-7ca0-4003-a9f7-f691f65097e4-kube-api-access-bckrd\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.207982 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38d12eb6-7ca0-4003-a9f7-f691f65097e4-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.215820 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a6a0-account-create-update-6rf9r"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.226518 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a6a0-account-create-update-6rf9r"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.234481 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementa6a0-account-delete-vhpnd"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.234752 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placementa6a0-account-delete-vhpnd" podUID="3aee1a2c-6a1e-48c0-9491-3f61371047eb" containerName="mariadb-account-delete" containerID="cri-o://fdc5b9b474ad12ca3867351b611c8f54d3bb3368f91df95bfe30268fd52088fe" gracePeriod=30
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.245286 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-lpbjw"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.254702 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-lpbjw"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.263776 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance5aff-account-delete-5hksm"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.272905 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-5aff-account-create-update-8rn6d"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.284331 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance5aff-account-delete-5hksm"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.294767 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-5aff-account-create-update-8rn6d"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.370823 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-qnwm6"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.376922 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-qnwm6"]
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.410958 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:13 crc kubenswrapper[4953]: E1211 10:38:13.411041 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts podName:12df8687-e24e-47fb-802c-3ab978ed04fd nodeName:}" failed. No retries permitted until 2025-12-11 10:38:17.411021135 +0000 UTC m=+1615.434880168 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts") pod "novacell0caaa-account-delete-n4fck" (UID: "12df8687-e24e-47fb-802c-3ab978ed04fd") : configmap "openstack-scripts" not found
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.415073 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3c8c-account-create-update-rrd7p"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.423939 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican3c8c-account-delete-kzsq8"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.424194 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican3c8c-account-delete-kzsq8" podUID="b09879bd-62c8-4810-ad58-09db28d6afb5" containerName="mariadb-account-delete" containerID="cri-o://b6d2cd8785b03d254d03f3c737ce94fe0726a8177ca0464c1ca95a86c4f2ae0c" gracePeriod=30
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.430189 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3c8c-account-create-update-rrd7p"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.523086 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4287349e-ff2e-483c-9ede-08ec5e03a2b4/ovn-northd/0.log"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.523155 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.524108 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.618469 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2529j"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.637684 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2529j"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.644536 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.673939 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.690957 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell0caaa-account-delete-n4fck"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.691160 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/novacell0caaa-account-delete-n4fck" podUID="12df8687-e24e-47fb-802c-3ab978ed04fd" containerName="mariadb-account-delete" containerID="cri-o://a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262" gracePeriod=30
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.702425 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-caaa-account-create-update-q2szt"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.715788 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-caaa-account-create-update-q2szt"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718709 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv427\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-kube-api-access-fv427\") pod \"b29c8985-0d8c-4382-9969-29422929136f\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718753 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-plugins-conf\") pod \"b29c8985-0d8c-4382-9969-29422929136f\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718779 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4287349e-ff2e-483c-9ede-08ec5e03a2b4-scripts\") pod \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718797 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-metrics-certs-tls-certs\") pod \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718821 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-ovn-northd-tls-certs\") pod \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718837 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01196778-96de-4f79-b9ac-e01243f86ebb-erlang-cookie-secret\") pod \"01196778-96de-4f79-b9ac-e01243f86ebb\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718856 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-server-conf\") pod \"01196778-96de-4f79-b9ac-e01243f86ebb\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718870 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-tls\") pod \"01196778-96de-4f79-b9ac-e01243f86ebb\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718883 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-combined-ca-bundle\") pod \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718904 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-plugins\") pod \"01196778-96de-4f79-b9ac-e01243f86ebb\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718921 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-erlang-cookie\") pod \"b29c8985-0d8c-4382-9969-29422929136f\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718936 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-combined-ca-bundle\") pod \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718952 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-confd\") pod \"01196778-96de-4f79-b9ac-e01243f86ebb\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718970 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-fernet-keys\") pod \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.718991 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"b29c8985-0d8c-4382-9969-29422929136f\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719010 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4287349e-ff2e-483c-9ede-08ec5e03a2b4-config\") pod \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719035 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01196778-96de-4f79-b9ac-e01243f86ebb-pod-info\") pod \"01196778-96de-4f79-b9ac-e01243f86ebb\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719054 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-plugins-conf\") pod \"01196778-96de-4f79-b9ac-e01243f86ebb\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719069 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-public-tls-certs\") pod \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719093 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-tls\") pod \"b29c8985-0d8c-4382-9969-29422929136f\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719113 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-credential-keys\") pod \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719128 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-config-data\") pod \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719143 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-plugins\") pod \"b29c8985-0d8c-4382-9969-29422929136f\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719166 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz6qh\" (UniqueName: \"kubernetes.io/projected/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-kube-api-access-qz6qh\") pod \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719188 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data\") pod \"b29c8985-0d8c-4382-9969-29422929136f\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719202 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"01196778-96de-4f79-b9ac-e01243f86ebb\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719234 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-server-conf\") pod \"b29c8985-0d8c-4382-9969-29422929136f\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719251 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-erlang-cookie\") pod \"01196778-96de-4f79-b9ac-e01243f86ebb\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719268 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-internal-tls-certs\") pod \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719293 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-confd\") pod \"b29c8985-0d8c-4382-9969-29422929136f\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719311 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-scripts\") pod \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\" (UID: \"b4e64ea9-3129-46a7-8197-bdd7730ad3f1\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719339 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b29c8985-0d8c-4382-9969-29422929136f-erlang-cookie-secret\") pod \"b29c8985-0d8c-4382-9969-29422929136f\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719356 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4287349e-ff2e-483c-9ede-08ec5e03a2b4-ovn-rundir\") pod \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719371 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsppt\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-kube-api-access-hsppt\") pod \"01196778-96de-4f79-b9ac-e01243f86ebb\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719388 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data\") pod \"01196778-96de-4f79-b9ac-e01243f86ebb\" (UID: \"01196778-96de-4f79-b9ac-e01243f86ebb\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719406 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b29c8985-0d8c-4382-9969-29422929136f-pod-info\") pod \"b29c8985-0d8c-4382-9969-29422929136f\" (UID: \"b29c8985-0d8c-4382-9969-29422929136f\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.719423 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v8nq\" (UniqueName: \"kubernetes.io/projected/4287349e-ff2e-483c-9ede-08ec5e03a2b4-kube-api-access-4v8nq\") pod \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\" (UID: \"4287349e-ff2e-483c-9ede-08ec5e03a2b4\") "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.721402 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "01196778-96de-4f79-b9ac-e01243f86ebb" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.721850 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "01196778-96de-4f79-b9ac-e01243f86ebb" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.722427 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b29c8985-0d8c-4382-9969-29422929136f" (UID: "b29c8985-0d8c-4382-9969-29422929136f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.735395 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4287349e-ff2e-483c-9ede-08ec5e03a2b4-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "4287349e-ff2e-483c-9ede-08ec5e03a2b4" (UID: "4287349e-ff2e-483c-9ede-08ec5e03a2b4"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.736235 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b29c8985-0d8c-4382-9969-29422929136f" (UID: "b29c8985-0d8c-4382-9969-29422929136f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.737458 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-kube-api-access-hsppt" (OuterVolumeSpecName: "kube-api-access-hsppt") pod "01196778-96de-4f79-b9ac-e01243f86ebb" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb"). InnerVolumeSpecName "kube-api-access-hsppt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.737549 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "01196778-96de-4f79-b9ac-e01243f86ebb" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.738337 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4287349e-ff2e-483c-9ede-08ec5e03a2b4-scripts" (OuterVolumeSpecName: "scripts") pod "4287349e-ff2e-483c-9ede-08ec5e03a2b4" (UID: "4287349e-ff2e-483c-9ede-08ec5e03a2b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.746405 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b29c8985-0d8c-4382-9969-29422929136f" (UID: "b29c8985-0d8c-4382-9969-29422929136f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.750100 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4287349e-ff2e-483c-9ede-08ec5e03a2b4-config" (OuterVolumeSpecName: "config") pod "4287349e-ff2e-483c-9ede-08ec5e03a2b4" (UID: "4287349e-ff2e-483c-9ede-08ec5e03a2b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.751268 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4287349e-ff2e-483c-9ede-08ec5e03a2b4-kube-api-access-4v8nq" (OuterVolumeSpecName: "kube-api-access-4v8nq") pod "4287349e-ff2e-483c-9ede-08ec5e03a2b4" (UID: "4287349e-ff2e-483c-9ede-08ec5e03a2b4"). InnerVolumeSpecName "kube-api-access-4v8nq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.755963 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "01196778-96de-4f79-b9ac-e01243f86ebb" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.756974 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-kube-api-access-fv427" (OuterVolumeSpecName: "kube-api-access-fv427") pod "b29c8985-0d8c-4382-9969-29422929136f" (UID: "b29c8985-0d8c-4382-9969-29422929136f"). InnerVolumeSpecName "kube-api-access-fv427". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.757043 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "b29c8985-0d8c-4382-9969-29422929136f" (UID: "b29c8985-0d8c-4382-9969-29422929136f"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.758940 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b29c8985-0d8c-4382-9969-29422929136f-pod-info" (OuterVolumeSpecName: "pod-info") pod "b29c8985-0d8c-4382-9969-29422929136f" (UID: "b29c8985-0d8c-4382-9969-29422929136f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.760793 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-kube-api-access-qz6qh" (OuterVolumeSpecName: "kube-api-access-qz6qh") pod "b4e64ea9-3129-46a7-8197-bdd7730ad3f1" (UID: "b4e64ea9-3129-46a7-8197-bdd7730ad3f1"). InnerVolumeSpecName "kube-api-access-qz6qh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.762082 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-scripts" (OuterVolumeSpecName: "scripts") pod "b4e64ea9-3129-46a7-8197-bdd7730ad3f1" (UID: "b4e64ea9-3129-46a7-8197-bdd7730ad3f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.762103 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b29c8985-0d8c-4382-9969-29422929136f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b29c8985-0d8c-4382-9969-29422929136f" (UID: "b29c8985-0d8c-4382-9969-29422929136f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.764668 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01196778-96de-4f79-b9ac-e01243f86ebb-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "01196778-96de-4f79-b9ac-e01243f86ebb" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.766838 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b4e64ea9-3129-46a7-8197-bdd7730ad3f1" (UID: "b4e64ea9-3129-46a7-8197-bdd7730ad3f1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.774343 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/01196778-96de-4f79-b9ac-e01243f86ebb-pod-info" (OuterVolumeSpecName: "pod-info") pod "01196778-96de-4f79-b9ac-e01243f86ebb" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.777946 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b29c8985-0d8c-4382-9969-29422929136f" (UID: "b29c8985-0d8c-4382-9969-29422929136f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.778033 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "01196778-96de-4f79-b9ac-e01243f86ebb" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.788095 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b4e64ea9-3129-46a7-8197-bdd7730ad3f1" (UID: "b4e64ea9-3129-46a7-8197-bdd7730ad3f1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.792932 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vq2rm"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.807712 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vq2rm"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.831802 4953 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.832006 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.832154 4953 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b29c8985-0d8c-4382-9969-29422929136f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.832708 4953 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4287349e-ff2e-483c-9ede-08ec5e03a2b4-ovn-rundir\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.832845 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsppt\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-kube-api-access-hsppt\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.832982 4953 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b29c8985-0d8c-4382-9969-29422929136f-pod-info\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.833714 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v8nq\" (UniqueName: \"kubernetes.io/projected/4287349e-ff2e-483c-9ede-08ec5e03a2b4-kube-api-access-4v8nq\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.833885 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv427\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-kube-api-access-fv427\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.834796 4953 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-plugins-conf\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.834937 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4287349e-ff2e-483c-9ede-08ec5e03a2b4-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.835034 4953 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01196778-96de-4f79-b9ac-e01243f86ebb-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.835143 4953 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.835252 4953 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.835348 4953 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.835442 4953 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.835609 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.835859 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4287349e-ff2e-483c-9ede-08ec5e03a2b4-config\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.836052 4953 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01196778-96de-4f79-b9ac-e01243f86ebb-pod-info\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.836186 4953 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-plugins-conf\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.836291 4953 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.836388 4953 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.867711 4953 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.867999 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz6qh\" (UniqueName: \"kubernetes.io/projected/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-kube-api-access-qz6qh\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.867197 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-339c-account-create-update-jjjms"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.868375 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.837807 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data" (OuterVolumeSpecName: "config-data") pod "01196778-96de-4f79-b9ac-e01243f86ebb" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.847925 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4287349e-ff2e-483c-9ede-08ec5e03a2b4" (UID: "4287349e-ff2e-483c-9ede-08ec5e03a2b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.874424 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi339c-account-delete-l2kws"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.875256 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/novaapi339c-account-delete-l2kws" podUID="10e32559-b465-4538-af8b-9dd3deedf2b9" containerName="mariadb-account-delete" containerID="cri-o://724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2" gracePeriod=30
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.890463 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-339c-account-create-update-jjjms"]
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.942604 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-config-data" (OuterVolumeSpecName: "config-data") pod "b4e64ea9-3129-46a7-8197-bdd7730ad3f1" (UID: "b4e64ea9-3129-46a7-8197-bdd7730ad3f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.946225 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.957224 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data" (OuterVolumeSpecName: "config-data") pod "b29c8985-0d8c-4382-9969-29422929136f" (UID: "b29c8985-0d8c-4382-9969-29422929136f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.961639 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55b68558f8-r49n8" event={"ID":"b4e64ea9-3129-46a7-8197-bdd7730ad3f1","Type":"ContainerDied","Data":"55c5e9404c3838b9c9ec907a74d56b2088eb8ac8f0292c6df2f9030b8129afcb"}
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.961691 4953 scope.go:117] "RemoveContainer" containerID="e95a830582a33c31ab5aeaf4d56f0badd309548a0a67ed5368d9a6983add712a"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.961831 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55b68558f8-r49n8"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.970466 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.970499 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.970511 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.970522 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.970534 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.974018 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4287349e-ff2e-483c-9ede-08ec5e03a2b4/ovn-northd/0.log"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.974130 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4287349e-ff2e-483c-9ede-08ec5e03a2b4","Type":"ContainerDied","Data":"cf045c703a6eff02b21452057e74d0bf94c9c669d3f08d464785c479c49135e8"}
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.974217 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.978934 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4e64ea9-3129-46a7-8197-bdd7730ad3f1" (UID: "b4e64ea9-3129-46a7-8197-bdd7730ad3f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.980263 4953 generic.go:334] "Generic (PLEG): container finished" podID="b29c8985-0d8c-4382-9969-29422929136f" containerID="8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480" exitCode=0
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.980314 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.980327 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b29c8985-0d8c-4382-9969-29422929136f","Type":"ContainerDied","Data":"8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480"}
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.980890 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b29c8985-0d8c-4382-9969-29422929136f","Type":"ContainerDied","Data":"1656a1963db1fe5049b3c6547d05e4f0c8bac32bebf99bdbbe2e3b32ddd579f6"}
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.986508 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"01196778-96de-4f79-b9ac-e01243f86ebb","Type":"ContainerDied","Data":"335e970b2bb298e905b5df33c2c1753b2cd38ee7861b2571301309400e9b9c32"}
Dec 11 10:38:13 crc kubenswrapper[4953]: I1211 10:38:13.986595 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.010663 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.011691 4953 scope.go:117] "RemoveContainer" containerID="08ed06bcd9932bd8cfb8cd17406a3860f1745658a72ff1d03735d51b925d7e64"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.013640 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b4e64ea9-3129-46a7-8197-bdd7730ad3f1" (UID: "b4e64ea9-3129-46a7-8197-bdd7730ad3f1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.026615 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b4e64ea9-3129-46a7-8197-bdd7730ad3f1" (UID: "b4e64ea9-3129-46a7-8197-bdd7730ad3f1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.035131 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-server-conf" (OuterVolumeSpecName: "server-conf") pod "b29c8985-0d8c-4382-9969-29422929136f" (UID: "b29c8985-0d8c-4382-9969-29422929136f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.041223 4953 scope.go:117] "RemoveContainer" containerID="c0f6853d6258372aa9946f5e58c9f253d8e32cbaa5a5914801b2de468c7d1703"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.042304 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-server-conf" (OuterVolumeSpecName: "server-conf") pod "01196778-96de-4f79-b9ac-e01243f86ebb" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.058872 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "4287349e-ff2e-483c-9ede-08ec5e03a2b4" (UID: "4287349e-ff2e-483c-9ede-08ec5e03a2b4"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.071971 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.072004 4953 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.072014 4953 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b29c8985-0d8c-4382-9969-29422929136f-server-conf\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.072023 4953 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.072032 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.072040 4953 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01196778-96de-4f79-b9ac-e01243f86ebb-server-conf\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.072049 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e64ea9-3129-46a7-8197-bdd7730ad3f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.074087 4953 scope.go:117] "RemoveContainer" containerID="8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.091428 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "4287349e-ff2e-483c-9ede-08ec5e03a2b4" (UID: "4287349e-ff2e-483c-9ede-08ec5e03a2b4"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.097852 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "01196778-96de-4f79-b9ac-e01243f86ebb" (UID: "01196778-96de-4f79-b9ac-e01243f86ebb"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.102546 4953 scope.go:117] "RemoveContainer" containerID="8ccd21efbe477435dfe6f0792b8d26e2c55b2f1636676f65356fe0d625e5ad71"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.120713 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b29c8985-0d8c-4382-9969-29422929136f" (UID: "b29c8985-0d8c-4382-9969-29422929136f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.173737 4953 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287349e-ff2e-483c-9ede-08ec5e03a2b4-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.173778 4953 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01196778-96de-4f79-b9ac-e01243f86ebb-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.173793 4953 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b29c8985-0d8c-4382-9969-29422929136f-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.195436 4953 scope.go:117] "RemoveContainer" containerID="8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480"
Dec 11 10:38:14 crc kubenswrapper[4953]: E1211 10:38:14.196901 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480\": container with ID starting with 8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480 not found: ID does not exist" containerID="8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.196938 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480"} err="failed to get container status \"8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480\": rpc error: code = NotFound desc = could not find container \"8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480\": container with ID starting with 8193f374115b267f95840c2fe78180f26fa81a7641851959e8cc0f1231cdb480 not found: ID does not exist"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.196961 4953 scope.go:117] "RemoveContainer" containerID="8ccd21efbe477435dfe6f0792b8d26e2c55b2f1636676f65356fe0d625e5ad71"
Dec 11 10:38:14 crc kubenswrapper[4953]: E1211 10:38:14.197218 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ccd21efbe477435dfe6f0792b8d26e2c55b2f1636676f65356fe0d625e5ad71\": container with ID starting with 8ccd21efbe477435dfe6f0792b8d26e2c55b2f1636676f65356fe0d625e5ad71 not found: ID does not exist" containerID="8ccd21efbe477435dfe6f0792b8d26e2c55b2f1636676f65356fe0d625e5ad71"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.197243 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ccd21efbe477435dfe6f0792b8d26e2c55b2f1636676f65356fe0d625e5ad71"} err="failed to get container status \"8ccd21efbe477435dfe6f0792b8d26e2c55b2f1636676f65356fe0d625e5ad71\": rpc error: code = NotFound desc = could not find container \"8ccd21efbe477435dfe6f0792b8d26e2c55b2f1636676f65356fe0d625e5ad71\": container with ID starting with 8ccd21efbe477435dfe6f0792b8d26e2c55b2f1636676f65356fe0d625e5ad71 not found: ID does not exist"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.197257 4953 scope.go:117] "RemoveContainer" containerID="a1bc8164296634778d4abaa0460ca228c5ac0bad626c3a54c3a93f97fe857237"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.231074 4953 scope.go:117] "RemoveContainer" containerID="94ecea46a02f645c72f741be8c0e8c18496d154632db9f0e42995f5ff8e48207"
Dec 11 10:38:14 crc kubenswrapper[4953]: E1211 10:38:14.376029 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:14 crc kubenswrapper[4953]: E1211 10:38:14.376099 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts podName:3aee1a2c-6a1e-48c0-9491-3f61371047eb nodeName:}" failed. No retries permitted until 2025-12-11 10:38:18.376083417 +0000 UTC m=+1616.399942450 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts") pod "placementa6a0-account-delete-vhpnd" (UID: "3aee1a2c-6a1e-48c0-9491-3f61371047eb") : configmap "openstack-scripts" not found
Dec 11 10:38:14 crc kubenswrapper[4953]: E1211 10:38:14.376107 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:14 crc kubenswrapper[4953]: E1211 10:38:14.376193 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts podName:10e32559-b465-4538-af8b-9dd3deedf2b9 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:18.376169819 +0000 UTC m=+1616.400028902 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts") pod "novaapi339c-account-delete-l2kws" (UID: "10e32559-b465-4538-af8b-9dd3deedf2b9") : configmap "openstack-scripts" not found
Dec 11 10:38:14 crc kubenswrapper[4953]: E1211 10:38:14.376419 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:14 crc kubenswrapper[4953]: E1211 10:38:14.376446 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts podName:b09879bd-62c8-4810-ad58-09db28d6afb5 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:18.376439178 +0000 UTC m=+1616.400298211 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts") pod "barbican3c8c-account-delete-kzsq8" (UID: "b09879bd-62c8-4810-ad58-09db28d6afb5") : configmap "openstack-scripts" not found
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.389992 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-55b68558f8-r49n8"]
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.400454 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-55b68558f8-r49n8"]
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.525828 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="150b2b4b-1e20-4e44-a696-ccca1d850081" path="/var/lib/kubelet/pods/150b2b4b-1e20-4e44-a696-ccca1d850081/volumes"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.530359 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f99edb-3870-42f3-bdef-ec4db335ba35" path="/var/lib/kubelet/pods/23f99edb-3870-42f3-bdef-ec4db335ba35/volumes"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.539364 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27258186-4cab-45b4-a20c-a4c3ddc82f76" path="/var/lib/kubelet/pods/27258186-4cab-45b4-a20c-a4c3ddc82f76/volumes"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.544534 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" path="/var/lib/kubelet/pods/345a513a-93a0-4e23-9266-3eeaf3ff0c10/volumes"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.547660 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d12eb6-7ca0-4003-a9f7-f691f65097e4" path="/var/lib/kubelet/pods/38d12eb6-7ca0-4003-a9f7-f691f65097e4/volumes"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.548839 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46cd550e-17c8-4cd2-a5e0-9746edf42836" path="/var/lib/kubelet/pods/46cd550e-17c8-4cd2-a5e0-9746edf42836/volumes"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.549655 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b1b7520-f52c-4a2a-98e5-16ac7460bade" path="/var/lib/kubelet/pods/4b1b7520-f52c-4a2a-98e5-16ac7460bade/volumes"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.551804 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" path="/var/lib/kubelet/pods/4b66dbe7-edd9-4e23-a3d0-0661efe89ac6/volumes"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.552448 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d" path="/var/lib/kubelet/pods/4dc8c1b4-a275-4c7f-bfd1-a38cfd35b62d/volumes"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.554737 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544e1955-4316-4587-90a8-94bac4f81ae5" path="/var/lib/kubelet/pods/544e1955-4316-4587-90a8-94bac4f81ae5/volumes"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.557165 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5531d3f7-dc86-4e44-8044-3fd0a6f05afc" path="/var/lib/kubelet/pods/5531d3f7-dc86-4e44-8044-3fd0a6f05afc/volumes"
Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.557789 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c566b6b-16f8-422c-acda-0325e36103e6"
path="/var/lib/kubelet/pods/5c566b6b-16f8-422c-acda-0325e36103e6/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.558412 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="767370a9-f8dd-4370-a2cc-f5baeff52c54" path="/var/lib/kubelet/pods/767370a9-f8dd-4370-a2cc-f5baeff52c54/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.559641 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7af3727e-8096-420d-b8d0-95988a5d36db" path="/var/lib/kubelet/pods/7af3727e-8096-420d-b8d0-95988a5d36db/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.560298 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b77681a-0823-42e6-b0a4-2af1ce955970" path="/var/lib/kubelet/pods/7b77681a-0823-42e6-b0a4-2af1ce955970/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.561186 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8022e972-07c0-4d22-837f-d70700c0fc83" path="/var/lib/kubelet/pods/8022e972-07c0-4d22-837f-d70700c0fc83/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.562364 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cd1362-41c0-4df3-8c3d-566ab77b6edf" path="/var/lib/kubelet/pods/80cd1362-41c0-4df3-8c3d-566ab77b6edf/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.563035 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8521d832-efe5-4653-8c0e-8921f916e10f" path="/var/lib/kubelet/pods/8521d832-efe5-4653-8c0e-8921f916e10f/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.563719 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b9ff8e-f944-48ee-803a-d6873a9db805" path="/var/lib/kubelet/pods/97b9ff8e-f944-48ee-803a-d6873a9db805/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.564865 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97da65db-6787-4eee-b1de-cd7da56f16e3" path="/var/lib/kubelet/pods/97da65db-6787-4eee-b1de-cd7da56f16e3/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.565387 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="992b7c13-c6c6-4641-9c9a-3d8bfbd5029c" path="/var/lib/kubelet/pods/992b7c13-c6c6-4641-9c9a-3d8bfbd5029c/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.565900 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da03c89-b3fb-431e-bef0-eb8f6d0b180e" path="/var/lib/kubelet/pods/9da03c89-b3fb-431e-bef0-eb8f6d0b180e/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.566952 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab07f951-5c8d-428b-9b26-52ea2284ee52" path="/var/lib/kubelet/pods/ab07f951-5c8d-428b-9b26-52ea2284ee52/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.567482 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab54cd16-46dc-45ba-95b2-28afc8aef126" path="/var/lib/kubelet/pods/ab54cd16-46dc-45ba-95b2-28afc8aef126/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.567981 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e64ea9-3129-46a7-8197-bdd7730ad3f1" path="/var/lib/kubelet/pods/b4e64ea9-3129-46a7-8197-bdd7730ad3f1/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.568560 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c095531c-eb2f-46ab-a014-2526cdc5462f" 
path="/var/lib/kubelet/pods/c095531c-eb2f-46ab-a014-2526cdc5462f/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.569517 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caec0159-12b1-46f9-952c-10f229948036" path="/var/lib/kubelet/pods/caec0159-12b1-46f9-952c-10f229948036/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.570169 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" path="/var/lib/kubelet/pods/cd4593de-19d2-47c1-b6b0-b9c0e46e1107/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.571016 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e6f8ed-38a5-46f4-b408-54f0f8f0be59" path="/var/lib/kubelet/pods/d0e6f8ed-38a5-46f4-b408-54f0f8f0be59/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.572099 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3193537-daf8-4c54-9200-4db57f86b98d" path="/var/lib/kubelet/pods/d3193537-daf8-4c54-9200-4db57f86b98d/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.572701 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e067a835-8a1a-4672-aaea-b8c101109018" path="/var/lib/kubelet/pods/e067a835-8a1a-4672-aaea-b8c101109018/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.573731 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6515789-e6f6-4aa3-83f3-4fc58f862dc9" path="/var/lib/kubelet/pods/e6515789-e6f6-4aa3-83f3-4fc58f862dc9/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.574880 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f43e322f-fe22-4942-8536-2e29d5bb0639" path="/var/lib/kubelet/pods/f43e322f-fe22-4942-8536-2e29d5bb0639/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.575834 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e05c83-6a7a-453a-89d5-ba471aba22e8" path="/var/lib/kubelet/pods/f7e05c83-6a7a-453a-89d5-ba471aba22e8/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.578363 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb34da44-aab9-4100-90fd-dfd6b323e85d" path="/var/lib/kubelet/pods/fb34da44-aab9-4100-90fd-dfd6b323e85d/volumes" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.579214 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.579247 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.579263 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.579275 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.579285 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.579295 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.761933 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.896981 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fdfbbe2-a3b8-4834-9920-114c40de67dc-log-httpd\") pod \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.897142 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-ceilometer-tls-certs\") pod \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.897170 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-scripts\") pod \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.897191 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6mt8\" (UniqueName: \"kubernetes.io/projected/0fdfbbe2-a3b8-4834-9920-114c40de67dc-kube-api-access-g6mt8\") pod \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.897268 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-config-data\") pod \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.897336 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fdfbbe2-a3b8-4834-9920-114c40de67dc-run-httpd\") pod \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.897360 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-sg-core-conf-yaml\") pod \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.897389 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-combined-ca-bundle\") pod \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\" (UID: \"0fdfbbe2-a3b8-4834-9920-114c40de67dc\") " Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.898054 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fdfbbe2-a3b8-4834-9920-114c40de67dc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0fdfbbe2-a3b8-4834-9920-114c40de67dc" (UID: "0fdfbbe2-a3b8-4834-9920-114c40de67dc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.901758 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fdfbbe2-a3b8-4834-9920-114c40de67dc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0fdfbbe2-a3b8-4834-9920-114c40de67dc" (UID: "0fdfbbe2-a3b8-4834-9920-114c40de67dc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.903644 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdfbbe2-a3b8-4834-9920-114c40de67dc-kube-api-access-g6mt8" (OuterVolumeSpecName: "kube-api-access-g6mt8") pod "0fdfbbe2-a3b8-4834-9920-114c40de67dc" (UID: "0fdfbbe2-a3b8-4834-9920-114c40de67dc"). InnerVolumeSpecName "kube-api-access-g6mt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:14 crc kubenswrapper[4953]: I1211 10:38:14.904489 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-scripts" (OuterVolumeSpecName: "scripts") pod "0fdfbbe2-a3b8-4834-9920-114c40de67dc" (UID: "0fdfbbe2-a3b8-4834-9920-114c40de67dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.006860 4953 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fdfbbe2-a3b8-4834-9920-114c40de67dc-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.006919 4953 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fdfbbe2-a3b8-4834-9920-114c40de67dc-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.006943 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.006967 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6mt8\" (UniqueName: \"kubernetes.io/projected/0fdfbbe2-a3b8-4834-9920-114c40de67dc-kube-api-access-g6mt8\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.055618 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" podUID="8521d832-efe5-4653-8c0e-8921f916e10f" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.168:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.057171 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-78f5cf7bd5-24fm8" podUID="8521d832-efe5-4653-8c0e-8921f916e10f" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.168:8080/healthcheck\": dial tcp 10.217.0.168:8080: i/o timeout" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.074471 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0fdfbbe2-a3b8-4834-9920-114c40de67dc" (UID: "0fdfbbe2-a3b8-4834-9920-114c40de67dc"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.082148 4953 generic.go:334] "Generic (PLEG): container finished" podID="1b3d5c24-61f6-4926-94ec-0e3a462334df" containerID="4d40902f2adb77e2b7dde3ed43d14df9863e66572e62ab6a82f12fa7bb0bcca2" exitCode=0 Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.082211 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1b3d5c24-61f6-4926-94ec-0e3a462334df","Type":"ContainerDied","Data":"4d40902f2adb77e2b7dde3ed43d14df9863e66572e62ab6a82f12fa7bb0bcca2"} Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.082393 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0fdfbbe2-a3b8-4834-9920-114c40de67dc" (UID: "0fdfbbe2-a3b8-4834-9920-114c40de67dc"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.085763 4953 generic.go:334] "Generic (PLEG): container finished" podID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerID="24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3" exitCode=0 Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.085788 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fdfbbe2-a3b8-4834-9920-114c40de67dc","Type":"ContainerDied","Data":"24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3"} Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.085805 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fdfbbe2-a3b8-4834-9920-114c40de67dc","Type":"ContainerDied","Data":"f6b7bc3fac68eba454c31ca6b4b8dc0e7012a56bee2904baf3ff4895258d09bf"} Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.085824 4953 scope.go:117] "RemoveContainer" containerID="7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.085925 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.101853 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fdfbbe2-a3b8-4834-9920-114c40de67dc" (UID: "0fdfbbe2-a3b8-4834-9920-114c40de67dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.108068 4953 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.108237 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.108296 4953 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.110701 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.121061 4953 scope.go:117] "RemoveContainer" containerID="833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.152093 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-config-data" (OuterVolumeSpecName: "config-data") pod "0fdfbbe2-a3b8-4834-9920-114c40de67dc" (UID: "0fdfbbe2-a3b8-4834-9920-114c40de67dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.158437 4953 scope.go:117] "RemoveContainer" containerID="24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.191184 4953 scope.go:117] "RemoveContainer" containerID="26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.209084 4953 scope.go:117] "RemoveContainer" containerID="7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.209662 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3d5c24-61f6-4926-94ec-0e3a462334df-combined-ca-bundle\") pod \"1b3d5c24-61f6-4926-94ec-0e3a462334df\" (UID: \"1b3d5c24-61f6-4926-94ec-0e3a462334df\") " Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.209693 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpfg7\" (UniqueName: \"kubernetes.io/projected/1b3d5c24-61f6-4926-94ec-0e3a462334df-kube-api-access-hpfg7\") pod \"1b3d5c24-61f6-4926-94ec-0e3a462334df\" (UID: \"1b3d5c24-61f6-4926-94ec-0e3a462334df\") " Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.209727 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3d5c24-61f6-4926-94ec-0e3a462334df-config-data\") pod \"1b3d5c24-61f6-4926-94ec-0e3a462334df\" (UID: \"1b3d5c24-61f6-4926-94ec-0e3a462334df\") " Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.210157 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdfbbe2-a3b8-4834-9920-114c40de67dc-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:15 crc 
kubenswrapper[4953]: E1211 10:38:15.210833 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f\": container with ID starting with 7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f not found: ID does not exist" containerID="7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.210873 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f"} err="failed to get container status \"7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f\": rpc error: code = NotFound desc = could not find container \"7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f\": container with ID starting with 7bcde1f160b621a411c4432d1c9223855ce56dae0721cd858ef2d9f01ba8fc4f not found: ID does not exist" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.210895 4953 scope.go:117] "RemoveContainer" containerID="833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409" Dec 11 10:38:15 crc kubenswrapper[4953]: E1211 10:38:15.211411 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409\": container with ID starting with 833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409 not found: ID does not exist" containerID="833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.211445 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409"} err="failed to get container status \"833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409\": rpc error: code = NotFound desc = could not find container \"833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409\": container with ID starting with 833b6f02c978f12986b237387138803da1e2d0773b34467c4d1a5b383a6b7409 not found: ID does not exist" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.211466 4953 scope.go:117] "RemoveContainer" containerID="24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3" Dec 11 10:38:15 crc kubenswrapper[4953]: E1211 10:38:15.211855 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3\": container with ID starting with 24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3 not found: ID does not exist" containerID="24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.211884 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3"} err="failed to get container status \"24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3\": rpc error: code = NotFound desc = could not find container \"24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3\": container with ID starting with 24a20114145ea26b514ff1c0db96904c68235dddedc19cbb3ebee0b622fd84b3 not found: ID does not exist" Dec 11 10:38:15 crc kubenswrapper[4953]: 
I1211 10:38:15.211903 4953 scope.go:117] "RemoveContainer" containerID="26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef" Dec 11 10:38:15 crc kubenswrapper[4953]: E1211 10:38:15.212232 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef\": container with ID starting with 26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef not found: ID does not exist" containerID="26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.212261 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef"} err="failed to get container status \"26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef\": rpc error: code = NotFound desc = could not find container \"26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef\": container with ID starting with 26c557ccd567d40c3f683a4b6ace8ab2e8b7ac5434a459e3c5578f86eab6d9ef not found: ID does not exist" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.213715 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3d5c24-61f6-4926-94ec-0e3a462334df-kube-api-access-hpfg7" (OuterVolumeSpecName: "kube-api-access-hpfg7") pod "1b3d5c24-61f6-4926-94ec-0e3a462334df" (UID: "1b3d5c24-61f6-4926-94ec-0e3a462334df"). InnerVolumeSpecName "kube-api-access-hpfg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.230174 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3d5c24-61f6-4926-94ec-0e3a462334df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b3d5c24-61f6-4926-94ec-0e3a462334df" (UID: "1b3d5c24-61f6-4926-94ec-0e3a462334df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.232439 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3d5c24-61f6-4926-94ec-0e3a462334df-config-data" (OuterVolumeSpecName: "config-data") pod "1b3d5c24-61f6-4926-94ec-0e3a462334df" (UID: "1b3d5c24-61f6-4926-94ec-0e3a462334df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.311646 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3d5c24-61f6-4926-94ec-0e3a462334df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.311728 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpfg7\" (UniqueName: \"kubernetes.io/projected/1b3d5c24-61f6-4926-94ec-0e3a462334df-kube-api-access-hpfg7\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.311748 4953 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3d5c24-61f6-4926-94ec-0e3a462334df-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.455917 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.461347 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.800673 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 10:38:15 crc kubenswrapper[4953]: I1211 10:38:15.800765 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:38:16 crc kubenswrapper[4953]: I1211 10:38:16.094995 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1b3d5c24-61f6-4926-94ec-0e3a462334df","Type":"ContainerDied","Data":"94dbef9b490a8e8c405a0a6aed23fb65496547797e1ed5c0afaf857e99833387"} Dec 11 10:38:16 crc kubenswrapper[4953]: I1211 10:38:16.095050 4953 scope.go:117] "RemoveContainer" containerID="4d40902f2adb77e2b7dde3ed43d14df9863e66572e62ab6a82f12fa7bb0bcca2" Dec 11 10:38:16 crc kubenswrapper[4953]: I1211 10:38:16.095158 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 10:38:16 crc kubenswrapper[4953]: I1211 10:38:16.129865 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 10:38:16 crc kubenswrapper[4953]: I1211 10:38:16.134531 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 10:38:16 crc kubenswrapper[4953]: I1211 10:38:16.487628 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01196778-96de-4f79-b9ac-e01243f86ebb" path="/var/lib/kubelet/pods/01196778-96de-4f79-b9ac-e01243f86ebb/volumes" Dec 11 10:38:16 crc kubenswrapper[4953]: I1211 10:38:16.488718 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" path="/var/lib/kubelet/pods/0fdfbbe2-a3b8-4834-9920-114c40de67dc/volumes" Dec 11 10:38:16 crc kubenswrapper[4953]: I1211 10:38:16.490155 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b3d5c24-61f6-4926-94ec-0e3a462334df" path="/var/lib/kubelet/pods/1b3d5c24-61f6-4926-94ec-0e3a462334df/volumes" Dec 11 10:38:16 crc kubenswrapper[4953]: I1211 10:38:16.490913 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" path="/var/lib/kubelet/pods/4287349e-ff2e-483c-9ede-08ec5e03a2b4/volumes" Dec 11 10:38:16 crc kubenswrapper[4953]: I1211 10:38:16.491898 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29c8985-0d8c-4382-9969-29422929136f" path="/var/lib/kubelet/pods/b29c8985-0d8c-4382-9969-29422929136f/volumes" Dec 11 10:38:17 crc kubenswrapper[4953]: E1211 10:38:17.079885 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod261b522a_b786_4b2b_975c_43f1cc0d8ccf.slice/crio-15faef1b4ad4c5d4d8142bd02ca5c8b72aa84f70caf14fbea0d98e763e1ee6d8.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.112355 4953 generic.go:334] "Generic (PLEG): container finished" podID="261b522a-b786-4b2b-975c-43f1cc0d8ccf" containerID="15faef1b4ad4c5d4d8142bd02ca5c8b72aa84f70caf14fbea0d98e763e1ee6d8" exitCode=0 Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.112397 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-677c7c8c9c-gh7rd" event={"ID":"261b522a-b786-4b2b-975c-43f1cc0d8ccf","Type":"ContainerDied","Data":"15faef1b4ad4c5d4d8142bd02ca5c8b72aa84f70caf14fbea0d98e763e1ee6d8"} Dec 11 10:38:17 crc kubenswrapper[4953]: E1211 10:38:17.498642 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:17 crc kubenswrapper[4953]: E1211 10:38:17.499004 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts podName:12df8687-e24e-47fb-802c-3ab978ed04fd nodeName:}" failed. No retries permitted until 2025-12-11 10:38:25.498982571 +0000 UTC m=+1623.522841604 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts") pod "novacell0caaa-account-delete-n4fck" (UID: "12df8687-e24e-47fb-802c-3ab978ed04fd") : configmap "openstack-scripts" not found Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.512319 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.599508 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-combined-ca-bundle\") pod \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.599635 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-internal-tls-certs\") pod \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.599664 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-httpd-config\") pod \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.599683 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-public-tls-certs\") pod \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.599708 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdjfv\" (UniqueName: \"kubernetes.io/projected/261b522a-b786-4b2b-975c-43f1cc0d8ccf-kube-api-access-qdjfv\") pod \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.599777 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-config\") pod \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.599803 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-ovndb-tls-certs\") pod \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\" (UID: \"261b522a-b786-4b2b-975c-43f1cc0d8ccf\") " Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.605707 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "261b522a-b786-4b2b-975c-43f1cc0d8ccf" (UID: "261b522a-b786-4b2b-975c-43f1cc0d8ccf"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.606181 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261b522a-b786-4b2b-975c-43f1cc0d8ccf-kube-api-access-qdjfv" (OuterVolumeSpecName: "kube-api-access-qdjfv") pod "261b522a-b786-4b2b-975c-43f1cc0d8ccf" (UID: "261b522a-b786-4b2b-975c-43f1cc0d8ccf"). InnerVolumeSpecName "kube-api-access-qdjfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.652886 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "261b522a-b786-4b2b-975c-43f1cc0d8ccf" (UID: "261b522a-b786-4b2b-975c-43f1cc0d8ccf"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.653899 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "261b522a-b786-4b2b-975c-43f1cc0d8ccf" (UID: "261b522a-b786-4b2b-975c-43f1cc0d8ccf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.664384 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-config" (OuterVolumeSpecName: "config") pod "261b522a-b786-4b2b-975c-43f1cc0d8ccf" (UID: "261b522a-b786-4b2b-975c-43f1cc0d8ccf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.671379 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "261b522a-b786-4b2b-975c-43f1cc0d8ccf" (UID: "261b522a-b786-4b2b-975c-43f1cc0d8ccf"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.684282 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "261b522a-b786-4b2b-975c-43f1cc0d8ccf" (UID: "261b522a-b786-4b2b-975c-43f1cc0d8ccf"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.702500 4953 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.702807 4953 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.702887 4953 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.702942 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdjfv\" (UniqueName: \"kubernetes.io/projected/261b522a-b786-4b2b-975c-43f1cc0d8ccf-kube-api-access-qdjfv\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.702997 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.703049 4953 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:17 crc kubenswrapper[4953]: I1211 10:38:17.703099 4953 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261b522a-b786-4b2b-975c-43f1cc0d8ccf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.011910 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.012519 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.013121 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.013164 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server" Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.013746 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.017955 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.020061 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.020106 4953 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovs-vswitchd" Dec 11 10:38:18 crc kubenswrapper[4953]: I1211 10:38:18.122133 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-677c7c8c9c-gh7rd" event={"ID":"261b522a-b786-4b2b-975c-43f1cc0d8ccf","Type":"ContainerDied","Data":"a95e449e61c33c558d02a877bc79a04b411cb5d264a424c33a6de6627ddfb3ee"} Dec 11 10:38:18 crc kubenswrapper[4953]: I1211 10:38:18.122185 4953 scope.go:117] "RemoveContainer" containerID="8ec34f149eb7b0df59ed60ac6fbbd810019ea5b30d0ab842e625394e2d8c2226" Dec 11 10:38:18 crc kubenswrapper[4953]: I1211 10:38:18.122309 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-677c7c8c9c-gh7rd" Dec 11 10:38:18 crc kubenswrapper[4953]: I1211 10:38:18.164238 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-677c7c8c9c-gh7rd"] Dec 11 10:38:18 crc kubenswrapper[4953]: I1211 10:38:18.170103 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-677c7c8c9c-gh7rd"] Dec 11 10:38:18 crc kubenswrapper[4953]: I1211 10:38:18.172401 4953 scope.go:117] "RemoveContainer" containerID="15faef1b4ad4c5d4d8142bd02ca5c8b72aa84f70caf14fbea0d98e763e1ee6d8" Dec 11 10:38:18 crc kubenswrapper[4953]: I1211 10:38:18.194067 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:38:18 crc kubenswrapper[4953]: I1211 10:38:18.194204 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:38:18 crc kubenswrapper[4953]: I1211 10:38:18.194328 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:38:18 crc kubenswrapper[4953]: I1211 10:38:18.195167 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:38:18 crc kubenswrapper[4953]: I1211 10:38:18.195326 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" gracePeriod=600 Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.419763 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.420185 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts podName:b09879bd-62c8-4810-ad58-09db28d6afb5 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:26.420165024 +0000 UTC m=+1624.444024057 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts") pod "barbican3c8c-account-delete-kzsq8" (UID: "b09879bd-62c8-4810-ad58-09db28d6afb5") : configmap "openstack-scripts" not found
Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.421519 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.421557 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts podName:3aee1a2c-6a1e-48c0-9491-3f61371047eb nodeName:}" failed. No retries permitted until 2025-12-11 10:38:26.421545087 +0000 UTC m=+1624.445404120 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts") pod "placementa6a0-account-delete-vhpnd" (UID: "3aee1a2c-6a1e-48c0-9491-3f61371047eb") : configmap "openstack-scripts" not found
Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.421611 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:18 crc kubenswrapper[4953]: E1211 10:38:18.422027 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts podName:10e32559-b465-4538-af8b-9dd3deedf2b9 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:26.422014742 +0000 UTC m=+1624.445873775 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts") pod "novaapi339c-account-delete-l2kws" (UID: "10e32559-b465-4538-af8b-9dd3deedf2b9") : configmap "openstack-scripts" not found
Dec 11 10:38:18 crc kubenswrapper[4953]: I1211 10:38:18.489038 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261b522a-b786-4b2b-975c-43f1cc0d8ccf" path="/var/lib/kubelet/pods/261b522a-b786-4b2b-975c-43f1cc0d8ccf/volumes"
Dec 11 10:38:19 crc kubenswrapper[4953]: I1211 10:38:19.137243 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" exitCode=0
Dec 11 10:38:19 crc kubenswrapper[4953]: I1211 10:38:19.137292 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c"}
Dec 11 10:38:19 crc kubenswrapper[4953]: I1211 10:38:19.137333 4953 scope.go:117] "RemoveContainer" containerID="3a6e85260ff84ef604c5e7d3682ea7027e5daf751b9330364d08387a0213f214"
Dec 11 10:38:19 crc kubenswrapper[4953]: E1211 10:38:19.334130 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:38:20 crc kubenswrapper[4953]: I1211 10:38:20.182691 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c"
Dec 11 10:38:20 crc kubenswrapper[4953]: E1211 10:38:20.183087 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:38:23 crc kubenswrapper[4953]: E1211 10:38:23.012045 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 11 10:38:23 crc kubenswrapper[4953]: E1211 10:38:23.012926 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 11 10:38:23 crc kubenswrapper[4953]: E1211 10:38:23.013297 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 11 10:38:23 crc kubenswrapper[4953]: E1211 10:38:23.013321 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server"
Dec 11 10:38:23 crc kubenswrapper[4953]: E1211 10:38:23.013740 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 11 10:38:23 crc kubenswrapper[4953]: E1211 10:38:23.014885 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 11 10:38:23 crc kubenswrapper[4953]: E1211 10:38:23.016519 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 11 10:38:23 crc kubenswrapper[4953]: E1211 10:38:23.016553 4953 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovs-vswitchd"
Dec 11 10:38:25 crc kubenswrapper[4953]: E1211 10:38:25.544677 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:25 crc kubenswrapper[4953]: E1211 10:38:25.544802 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts podName:12df8687-e24e-47fb-802c-3ab978ed04fd nodeName:}" failed. No retries permitted until 2025-12-11 10:38:41.544780812 +0000 UTC m=+1639.568639865 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts") pod "novacell0caaa-account-delete-n4fck" (UID: "12df8687-e24e-47fb-802c-3ab978ed04fd") : configmap "openstack-scripts" not found
Dec 11 10:38:26 crc kubenswrapper[4953]: E1211 10:38:26.459050 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:26 crc kubenswrapper[4953]: E1211 10:38:26.459559 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts podName:10e32559-b465-4538-af8b-9dd3deedf2b9 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:42.459532812 +0000 UTC m=+1640.483391855 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts") pod "novaapi339c-account-delete-l2kws" (UID: "10e32559-b465-4538-af8b-9dd3deedf2b9") : configmap "openstack-scripts" not found
Dec 11 10:38:26 crc kubenswrapper[4953]: E1211 10:38:26.459102 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:26 crc kubenswrapper[4953]: E1211 10:38:26.459122 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:26 crc kubenswrapper[4953]: E1211 10:38:26.459800 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts podName:b09879bd-62c8-4810-ad58-09db28d6afb5 nodeName:}" failed. No retries permitted until 2025-12-11 10:38:42.459769719 +0000 UTC m=+1640.483628793 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts") pod "barbican3c8c-account-delete-kzsq8" (UID: "b09879bd-62c8-4810-ad58-09db28d6afb5") : configmap "openstack-scripts" not found
Dec 11 10:38:26 crc kubenswrapper[4953]: E1211 10:38:26.459911 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts podName:3aee1a2c-6a1e-48c0-9491-3f61371047eb nodeName:}" failed. No retries permitted until 2025-12-11 10:38:42.459888563 +0000 UTC m=+1640.483747606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts") pod "placementa6a0-account-delete-vhpnd" (UID: "3aee1a2c-6a1e-48c0-9491-3f61371047eb") : configmap "openstack-scripts" not found
Dec 11 10:38:28 crc kubenswrapper[4953]: E1211 10:38:28.012357 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 11 10:38:28 crc kubenswrapper[4953]: E1211 10:38:28.013304 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 11 10:38:28 crc kubenswrapper[4953]: E1211 10:38:28.013493 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 11 10:38:28 crc kubenswrapper[4953]: E1211 10:38:28.013670 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 11 10:38:28 crc kubenswrapper[4953]: E1211 10:38:28.013714 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server"
Dec 11 10:38:28 crc kubenswrapper[4953]: E1211 10:38:28.015038 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 11 10:38:28 crc kubenswrapper[4953]: E1211 10:38:28.016342 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 11 10:38:28 crc kubenswrapper[4953]: E1211 10:38:28.016372 4953 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovs-vswitchd"
Dec 11 10:38:30 crc kubenswrapper[4953]: I1211 10:38:30.697491 4953 scope.go:117] "RemoveContainer" containerID="01fafdd99eaa9ede427831b53dacf59c2f223520959b1141bcba498e96fc5d55"
Dec 11 10:38:30 crc kubenswrapper[4953]: I1211 10:38:30.732542 4953 scope.go:117] "RemoveContainer" containerID="8c899ec3f19ce335b2f89755f8a4e4532bfe9f417bd7fb76d6371e306044ac4e"
Dec 11 10:38:30 crc kubenswrapper[4953]: I1211 10:38:30.782407 4953 scope.go:117] "RemoveContainer" containerID="b7f497b107b8e8652a7f168df902d76edf4cc8c0d003e369a126e81b80c2c81c"
Dec 11 10:38:33 crc kubenswrapper[4953]: E1211 10:38:33.011594 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 11 10:38:33 crc kubenswrapper[4953]: E1211 10:38:33.012299 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 11 10:38:33 crc kubenswrapper[4953]: E1211 10:38:33.012859 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 11 10:38:33 crc kubenswrapper[4953]: E1211 10:38:33.012904 4953 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server"
Dec 11 10:38:33 crc kubenswrapper[4953]: E1211 10:38:33.013086 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 11 10:38:33 crc kubenswrapper[4953]: E1211 10:38:33.017196 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 11 10:38:33 crc kubenswrapper[4953]: E1211 10:38:33.024043 4953 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 11 10:38:33 crc kubenswrapper[4953]: E1211 10:38:33.024123 4953 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbtwm" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovs-vswitchd"
Dec 11 10:38:33 crc kubenswrapper[4953]: I1211 10:38:33.473189 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c"
Dec 11 10:38:33 crc kubenswrapper[4953]: E1211 10:38:33.473609 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.434533 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbtwm_5cfd14e5-05e2-4cc5-ba83-259321c6f872/ovs-vswitchd/0.log"
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.435998 4953 generic.go:334] "Generic (PLEG): container finished" podID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6" exitCode=137
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.436089 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbtwm" event={"ID":"5cfd14e5-05e2-4cc5-ba83-259321c6f872","Type":"ContainerDied","Data":"f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6"}
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.589374 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbtwm_5cfd14e5-05e2-4cc5-ba83-259321c6f872/ovs-vswitchd/0.log"
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.590349 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.729727 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-log\") pod \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") "
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.729853 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-log" (OuterVolumeSpecName: "var-log") pod "5cfd14e5-05e2-4cc5-ba83-259321c6f872" (UID: "5cfd14e5-05e2-4cc5-ba83-259321c6f872"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.729904 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-etc-ovs\") pod \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") "
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.729993 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5cfd14e5-05e2-4cc5-ba83-259321c6f872-scripts\") pod \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") "
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.730068 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-lib\") pod \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") "
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.730084 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "5cfd14e5-05e2-4cc5-ba83-259321c6f872" (UID: "5cfd14e5-05e2-4cc5-ba83-259321c6f872"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.730099 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-run\") pod \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") "
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.730139 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9bsb\" (UniqueName: \"kubernetes.io/projected/5cfd14e5-05e2-4cc5-ba83-259321c6f872-kube-api-access-d9bsb\") pod \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\" (UID: \"5cfd14e5-05e2-4cc5-ba83-259321c6f872\") "
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.730351 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-lib" (OuterVolumeSpecName: "var-lib") pod "5cfd14e5-05e2-4cc5-ba83-259321c6f872" (UID: "5cfd14e5-05e2-4cc5-ba83-259321c6f872"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.730460 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-run" (OuterVolumeSpecName: "var-run") pod "5cfd14e5-05e2-4cc5-ba83-259321c6f872" (UID: "5cfd14e5-05e2-4cc5-ba83-259321c6f872"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.730543 4953 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-lib\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.730564 4953 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-log\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.730636 4953 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-etc-ovs\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.731818 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cfd14e5-05e2-4cc5-ba83-259321c6f872-scripts" (OuterVolumeSpecName: "scripts") pod "5cfd14e5-05e2-4cc5-ba83-259321c6f872" (UID: "5cfd14e5-05e2-4cc5-ba83-259321c6f872"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.737132 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cfd14e5-05e2-4cc5-ba83-259321c6f872-kube-api-access-d9bsb" (OuterVolumeSpecName: "kube-api-access-d9bsb") pod "5cfd14e5-05e2-4cc5-ba83-259321c6f872" (UID: "5cfd14e5-05e2-4cc5-ba83-259321c6f872"). InnerVolumeSpecName "kube-api-access-d9bsb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.832248 4953 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5cfd14e5-05e2-4cc5-ba83-259321c6f872-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.832285 4953 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5cfd14e5-05e2-4cc5-ba83-259321c6f872-var-run\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:34 crc kubenswrapper[4953]: I1211 10:38:34.832298 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9bsb\" (UniqueName: \"kubernetes.io/projected/5cfd14e5-05e2-4cc5-ba83-259321c6f872-kube-api-access-d9bsb\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.446626 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbtwm_5cfd14e5-05e2-4cc5-ba83-259321c6f872/ovs-vswitchd/0.log"
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.447651 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbtwm" event={"ID":"5cfd14e5-05e2-4cc5-ba83-259321c6f872","Type":"ContainerDied","Data":"4399ce8689c1cc367477024dd8650a0a20f282fb0dc067b1690b582fe77bbdf2"}
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.447700 4953 scope.go:117] "RemoveContainer" containerID="f6f4f73f93ab838f657b20b0e0f2f7780e20c20fb3adfe66d3e44a87fc1d18c6"
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.447820 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mbtwm"
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.461867 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerID="510beded97d4416b8880cd56e6120af1f949d427769ab1ee2c169557d12d5494" exitCode=137
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.461914 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"510beded97d4416b8880cd56e6120af1f949d427769ab1ee2c169557d12d5494"}
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.495334 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-mbtwm"]
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.499091 4953 scope.go:117] "RemoveContainer" containerID="9dd0749df58975f05de050cfcf92dc87ed6378284f27a69c71579f156df64d52"
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.500401 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-mbtwm"]
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.528428 4953 scope.go:117] "RemoveContainer" containerID="a60310c044585031cbb4f2c50aa560e5bb93943517261c36733ba28b71e81580"
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.851173 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.948611 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"7be1c768-78bb-476b-b51d-8e4fe80b8500\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") "
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.948676 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift\") pod \"7be1c768-78bb-476b-b51d-8e4fe80b8500\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") "
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.948728 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7be1c768-78bb-476b-b51d-8e4fe80b8500-lock\") pod \"7be1c768-78bb-476b-b51d-8e4fe80b8500\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") "
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.948813 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7be1c768-78bb-476b-b51d-8e4fe80b8500-cache\") pod \"7be1c768-78bb-476b-b51d-8e4fe80b8500\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") "
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.948884 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2msdn\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-kube-api-access-2msdn\") pod \"7be1c768-78bb-476b-b51d-8e4fe80b8500\" (UID: \"7be1c768-78bb-476b-b51d-8e4fe80b8500\") "
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.949537 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be1c768-78bb-476b-b51d-8e4fe80b8500-lock" (OuterVolumeSpecName: "lock") pod "7be1c768-78bb-476b-b51d-8e4fe80b8500" (UID: "7be1c768-78bb-476b-b51d-8e4fe80b8500"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.949906 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be1c768-78bb-476b-b51d-8e4fe80b8500-cache" (OuterVolumeSpecName: "cache") pod "7be1c768-78bb-476b-b51d-8e4fe80b8500" (UID: "7be1c768-78bb-476b-b51d-8e4fe80b8500"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.961894 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "swift") pod "7be1c768-78bb-476b-b51d-8e4fe80b8500" (UID: "7be1c768-78bb-476b-b51d-8e4fe80b8500"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.961960 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-kube-api-access-2msdn" (OuterVolumeSpecName: "kube-api-access-2msdn") pod "7be1c768-78bb-476b-b51d-8e4fe80b8500" (UID: "7be1c768-78bb-476b-b51d-8e4fe80b8500"). InnerVolumeSpecName "kube-api-access-2msdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:35 crc kubenswrapper[4953]: I1211 10:38:35.962002 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7be1c768-78bb-476b-b51d-8e4fe80b8500" (UID: "7be1c768-78bb-476b-b51d-8e4fe80b8500"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.050545 4953 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7be1c768-78bb-476b-b51d-8e4fe80b8500-cache\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.050607 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2msdn\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-kube-api-access-2msdn\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.050646 4953 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.050658 4953 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7be1c768-78bb-476b-b51d-8e4fe80b8500-etc-swift\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.050671 4953 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7be1c768-78bb-476b-b51d-8e4fe80b8500-lock\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.065057 4953 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.152503 4953 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.479684 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.483721 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" path="/var/lib/kubelet/pods/5cfd14e5-05e2-4cc5-ba83-259321c6f872/volumes"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.484715 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7be1c768-78bb-476b-b51d-8e4fe80b8500","Type":"ContainerDied","Data":"c9663b5e599da44a2b74b878eb12528722e78579e0db32a7b865205da603cdb9"}
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.484758 4953 scope.go:117] "RemoveContainer" containerID="510beded97d4416b8880cd56e6120af1f949d427769ab1ee2c169557d12d5494"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.510064 4953 scope.go:117] "RemoveContainer" containerID="3c65359d49ee68c46b25f7c48cca23725c2a07a228cfed6a3b8c90cef4f401ce"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.536639 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.539015 4953 scope.go:117] "RemoveContainer" containerID="55455d29b2f9f09dccbeb1ee95244b733e578206c81bca651b8b08a2abc3da6f"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.545166 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"]
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.557629 4953 scope.go:117] "RemoveContainer" containerID="c7fa20846bc15438ea48e549cb0457b5fdbbcd2598a4d940ee938fb4fb3a9db3"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.575699 4953 scope.go:117] "RemoveContainer" containerID="bc0e3f085ef80ef3d58ffae3ef2a52f5bf40447e1f3f4fae4ba935bd88ae1802"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.591887 4953 scope.go:117] "RemoveContainer" containerID="8981b379cfe002ec1ffbcd789bf3f9088d55241543514d305383406d070e9749"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.609959 4953 scope.go:117] "RemoveContainer" containerID="42ee56a6413b971f972dd83deea70f7f4ed0f5bd15d3d8739f47c3de625b36da"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.632827 4953 scope.go:117] "RemoveContainer" containerID="ee01036005d992c399d8891c4088b620c28089677482095eb23ddbcf5787ed0f"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.650039 4953 scope.go:117] "RemoveContainer" containerID="8271a6a07ac8401063b754218c3eb89ceb4f2d9d019082057eb897dcd5350656"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.668465 4953 scope.go:117] "RemoveContainer" containerID="47e0171f5c393def51346598fe0050490ca2584402ed6532e4a68c71c29d1284"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.687802 4953 scope.go:117] "RemoveContainer" containerID="bf1b66be16060aee36932d81a73465cd1174ad5e0ce2ac136fa9b17ea2beb026"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.707449 4953 scope.go:117] "RemoveContainer" containerID="84916ff0808e4afae4bbc6dc9c0bfcc649e85608c78bcca53fc062955964d97f"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.729429 4953 scope.go:117] "RemoveContainer" containerID="8bd2acaf8a28b1f1656e66014334ca8748f846ad6e8ad38b27cb4bdf466f3173"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.747888 4953 scope.go:117] "RemoveContainer" containerID="679d2553c36012b1b180157877c057ec44f2c2462adfbbecdb5379d3b623b02c"
Dec 11 10:38:36 crc kubenswrapper[4953]: I1211 10:38:36.766511 4953 scope.go:117] "RemoveContainer" containerID="8f4c46cb4b9e3e20f278144150f92781df0603ba1ce189953a04f830ee3bc004"
Dec 11 10:38:38 crc kubenswrapper[4953]: I1211 10:38:38.485680 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" path="/var/lib/kubelet/pods/7be1c768-78bb-476b-b51d-8e4fe80b8500/volumes"
Dec 11 10:38:41 crc kubenswrapper[4953]: E1211 10:38:41.632713 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:41 crc kubenswrapper[4953]: E1211 10:38:41.633150 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts podName:12df8687-e24e-47fb-802c-3ab978ed04fd nodeName:}" failed. No retries permitted until 2025-12-11 10:39:13.63313378 +0000 UTC m=+1671.656992983 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts") pod "novacell0caaa-account-delete-n4fck" (UID: "12df8687-e24e-47fb-802c-3ab978ed04fd") : configmap "openstack-scripts" not found
Dec 11 10:38:42 crc kubenswrapper[4953]: E1211 10:38:42.545313 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:42 crc kubenswrapper[4953]: E1211 10:38:42.545566 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts podName:3aee1a2c-6a1e-48c0-9491-3f61371047eb nodeName:}" failed. No retries permitted until 2025-12-11 10:39:14.545499806 +0000 UTC m=+1672.569358849 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts") pod "placementa6a0-account-delete-vhpnd" (UID: "3aee1a2c-6a1e-48c0-9491-3f61371047eb") : configmap "openstack-scripts" not found
Dec 11 10:38:42 crc kubenswrapper[4953]: E1211 10:38:42.546034 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:42 crc kubenswrapper[4953]: E1211 10:38:42.546088 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts podName:b09879bd-62c8-4810-ad58-09db28d6afb5 nodeName:}" failed. No retries permitted until 2025-12-11 10:39:14.546074224 +0000 UTC m=+1672.569933257 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts") pod "barbican3c8c-account-delete-kzsq8" (UID: "b09879bd-62c8-4810-ad58-09db28d6afb5") : configmap "openstack-scripts" not found
Dec 11 10:38:42 crc kubenswrapper[4953]: E1211 10:38:42.546125 4953 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 11 10:38:42 crc kubenswrapper[4953]: E1211 10:38:42.546145 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts podName:10e32559-b465-4538-af8b-9dd3deedf2b9 nodeName:}" failed. No retries permitted until 2025-12-11 10:39:14.546139236 +0000 UTC m=+1672.569998269 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts") pod "novaapi339c-account-delete-l2kws" (UID: "10e32559-b465-4538-af8b-9dd3deedf2b9") : configmap "openstack-scripts" not found
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.560527 4953 generic.go:334] "Generic (PLEG): container finished" podID="b09879bd-62c8-4810-ad58-09db28d6afb5" containerID="b6d2cd8785b03d254d03f3c737ce94fe0726a8177ca0464c1ca95a86c4f2ae0c" exitCode=137
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.560669 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican3c8c-account-delete-kzsq8" event={"ID":"b09879bd-62c8-4810-ad58-09db28d6afb5","Type":"ContainerDied","Data":"b6d2cd8785b03d254d03f3c737ce94fe0726a8177ca0464c1ca95a86c4f2ae0c"}
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.563502 4953 generic.go:334] "Generic (PLEG): container finished" podID="3aee1a2c-6a1e-48c0-9491-3f61371047eb" containerID="fdc5b9b474ad12ca3867351b611c8f54d3bb3368f91df95bfe30268fd52088fe" exitCode=137
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.563531 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementa6a0-account-delete-vhpnd" event={"ID":"3aee1a2c-6a1e-48c0-9491-3f61371047eb","Type":"ContainerDied","Data":"fdc5b9b474ad12ca3867351b611c8f54d3bb3368f91df95bfe30268fd52088fe"}
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.864845 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placementa6a0-account-delete-vhpnd"
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.874829 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican3c8c-account-delete-kzsq8"
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.906661 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdx4n\" (UniqueName: \"kubernetes.io/projected/3aee1a2c-6a1e-48c0-9491-3f61371047eb-kube-api-access-vdx4n\") pod \"3aee1a2c-6a1e-48c0-9491-3f61371047eb\" (UID: \"3aee1a2c-6a1e-48c0-9491-3f61371047eb\") "
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.906769 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts\") pod \"3aee1a2c-6a1e-48c0-9491-3f61371047eb\" (UID: \"3aee1a2c-6a1e-48c0-9491-3f61371047eb\") "
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.906818 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt9pr\" (UniqueName: \"kubernetes.io/projected/b09879bd-62c8-4810-ad58-09db28d6afb5-kube-api-access-xt9pr\") pod \"b09879bd-62c8-4810-ad58-09db28d6afb5\" (UID: \"b09879bd-62c8-4810-ad58-09db28d6afb5\") "
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.906959 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts\") pod \"b09879bd-62c8-4810-ad58-09db28d6afb5\" (UID: \"b09879bd-62c8-4810-ad58-09db28d6afb5\") "
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.907727 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3aee1a2c-6a1e-48c0-9491-3f61371047eb" (UID: "3aee1a2c-6a1e-48c0-9491-3f61371047eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.907952 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b09879bd-62c8-4810-ad58-09db28d6afb5" (UID: "b09879bd-62c8-4810-ad58-09db28d6afb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.917428 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aee1a2c-6a1e-48c0-9491-3f61371047eb-kube-api-access-vdx4n" (OuterVolumeSpecName: "kube-api-access-vdx4n") pod "3aee1a2c-6a1e-48c0-9491-3f61371047eb" (UID: "3aee1a2c-6a1e-48c0-9491-3f61371047eb"). InnerVolumeSpecName "kube-api-access-vdx4n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:43 crc kubenswrapper[4953]: I1211 10:38:43.918418 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b09879bd-62c8-4810-ad58-09db28d6afb5-kube-api-access-xt9pr" (OuterVolumeSpecName: "kube-api-access-xt9pr") pod "b09879bd-62c8-4810-ad58-09db28d6afb5" (UID: "b09879bd-62c8-4810-ad58-09db28d6afb5"). InnerVolumeSpecName "kube-api-access-xt9pr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.021036 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b09879bd-62c8-4810-ad58-09db28d6afb5-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.021070 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdx4n\" (UniqueName: \"kubernetes.io/projected/3aee1a2c-6a1e-48c0-9491-3f61371047eb-kube-api-access-vdx4n\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.021087 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aee1a2c-6a1e-48c0-9491-3f61371047eb-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.021105 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt9pr\" (UniqueName: \"kubernetes.io/projected/b09879bd-62c8-4810-ad58-09db28d6afb5-kube-api-access-xt9pr\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.034914 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell0caaa-account-delete-n4fck"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.121672 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t47tk\" (UniqueName: \"kubernetes.io/projected/12df8687-e24e-47fb-802c-3ab978ed04fd-kube-api-access-t47tk\") pod \"12df8687-e24e-47fb-802c-3ab978ed04fd\" (UID: \"12df8687-e24e-47fb-802c-3ab978ed04fd\") "
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.121902 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts\") pod \"12df8687-e24e-47fb-802c-3ab978ed04fd\" (UID: \"12df8687-e24e-47fb-802c-3ab978ed04fd\") "
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.122378 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12df8687-e24e-47fb-802c-3ab978ed04fd" (UID: "12df8687-e24e-47fb-802c-3ab978ed04fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.124660 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12df8687-e24e-47fb-802c-3ab978ed04fd-kube-api-access-t47tk" (OuterVolumeSpecName: "kube-api-access-t47tk") pod "12df8687-e24e-47fb-802c-3ab978ed04fd" (UID: "12df8687-e24e-47fb-802c-3ab978ed04fd"). InnerVolumeSpecName "kube-api-access-t47tk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.205548 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi339c-account-delete-l2kws"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.223342 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts\") pod \"10e32559-b465-4538-af8b-9dd3deedf2b9\" (UID: \"10e32559-b465-4538-af8b-9dd3deedf2b9\") "
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.223481 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcrgq\" (UniqueName: \"kubernetes.io/projected/10e32559-b465-4538-af8b-9dd3deedf2b9-kube-api-access-wcrgq\") pod \"10e32559-b465-4538-af8b-9dd3deedf2b9\" (UID: \"10e32559-b465-4538-af8b-9dd3deedf2b9\") "
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.223732 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12df8687-e24e-47fb-802c-3ab978ed04fd-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.223753 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t47tk\" (UniqueName: \"kubernetes.io/projected/12df8687-e24e-47fb-802c-3ab978ed04fd-kube-api-access-t47tk\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.224064 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10e32559-b465-4538-af8b-9dd3deedf2b9" (UID: "10e32559-b465-4538-af8b-9dd3deedf2b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.228802 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10e32559-b465-4538-af8b-9dd3deedf2b9-kube-api-access-wcrgq" (OuterVolumeSpecName: "kube-api-access-wcrgq") pod "10e32559-b465-4538-af8b-9dd3deedf2b9" (UID: "10e32559-b465-4538-af8b-9dd3deedf2b9"). InnerVolumeSpecName "kube-api-access-wcrgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.324817 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcrgq\" (UniqueName: \"kubernetes.io/projected/10e32559-b465-4538-af8b-9dd3deedf2b9-kube-api-access-wcrgq\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.324855 4953 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10e32559-b465-4538-af8b-9dd3deedf2b9-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.591638 4953 generic.go:334] "Generic (PLEG): container finished" podID="10e32559-b465-4538-af8b-9dd3deedf2b9" containerID="724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2" exitCode=137
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.591715 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi339c-account-delete-l2kws"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.591708 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi339c-account-delete-l2kws" event={"ID":"10e32559-b465-4538-af8b-9dd3deedf2b9","Type":"ContainerDied","Data":"724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2"}
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.591789 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi339c-account-delete-l2kws" event={"ID":"10e32559-b465-4538-af8b-9dd3deedf2b9","Type":"ContainerDied","Data":"4f316ac35b4df1a9f5195a54af7c20b83aa8d2cd8c9a6a360ce94ef77c733433"}
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.591813 4953 scope.go:117] "RemoveContainer" containerID="724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.597316 4953 generic.go:334] "Generic (PLEG): container finished" podID="12df8687-e24e-47fb-802c-3ab978ed04fd" containerID="a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262" exitCode=137
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.597395 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0caaa-account-delete-n4fck" event={"ID":"12df8687-e24e-47fb-802c-3ab978ed04fd","Type":"ContainerDied","Data":"a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262"}
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.597409 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell0caaa-account-delete-n4fck"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.597429 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0caaa-account-delete-n4fck" event={"ID":"12df8687-e24e-47fb-802c-3ab978ed04fd","Type":"ContainerDied","Data":"1f4a8b8fe5c62cfd729ea5bef5c02cbf9e6535458dd38a397dfc635e8f13b149"}
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.600944 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementa6a0-account-delete-vhpnd" event={"ID":"3aee1a2c-6a1e-48c0-9491-3f61371047eb","Type":"ContainerDied","Data":"50cdd6cfdaf7476788da5828eaafbaf677a388187dfb6ff3243df19709359e3e"}
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.600959 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placementa6a0-account-delete-vhpnd"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.604351 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican3c8c-account-delete-kzsq8" event={"ID":"b09879bd-62c8-4810-ad58-09db28d6afb5","Type":"ContainerDied","Data":"56ab39730968e40fdc10da81605b560f0b5f5a4167aefccc2be6a303575bc685"}
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.604427 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican3c8c-account-delete-kzsq8"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.617887 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi339c-account-delete-l2kws"]
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.624930 4953 scope.go:117] "RemoveContainer" containerID="724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2"
Dec 11 10:38:44 crc kubenswrapper[4953]: E1211 10:38:44.627639 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2\": container with ID starting with 724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2 not found: ID does not exist" containerID="724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.627679 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2"} err="failed to get container status \"724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2\": rpc error: code = NotFound desc = could not find container \"724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2\": container with ID starting with 724ef16300326cd83d936dac5cc2888490dcc1ad76a2a512c2133c22dfc295a2 not found: ID does not exist"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.627715 4953 scope.go:117] "RemoveContainer" containerID="a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.631743 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapi339c-account-delete-l2kws"]
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.644558 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell0caaa-account-delete-n4fck"]
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.649985 4953 scope.go:117] "RemoveContainer" containerID="a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262"
Dec 11 10:38:44 crc kubenswrapper[4953]: E1211 10:38:44.653804 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262\": container with ID starting with a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262 not found: ID does not exist" containerID="a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.653852 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262"} err="failed to get container status \"a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262\": rpc error: code = NotFound desc = could not find container \"a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262\": container with ID starting with a0459dab2bbbd23193b9976fdc006d97c50347d90ea60322c21cd8b6deef3262 not found: ID does not exist"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.653882 4953 scope.go:117] "RemoveContainer" containerID="fdc5b9b474ad12ca3867351b611c8f54d3bb3368f91df95bfe30268fd52088fe"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.656032 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell0caaa-account-delete-n4fck"]
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.662471 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementa6a0-account-delete-vhpnd"]
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.691979 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placementa6a0-account-delete-vhpnd"]
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.833812 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican3c8c-account-delete-kzsq8"]
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.836519 4953 scope.go:117] "RemoveContainer" containerID="b6d2cd8785b03d254d03f3c737ce94fe0726a8177ca0464c1ca95a86c4f2ae0c"
Dec 11 10:38:44 crc kubenswrapper[4953]: I1211 10:38:44.839276 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican3c8c-account-delete-kzsq8"]
Dec 11 10:38:46 crc kubenswrapper[4953]: I1211 10:38:46.474631 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c"
Dec 11 10:38:46 crc kubenswrapper[4953]: E1211 10:38:46.475431 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:38:46 crc kubenswrapper[4953]: I1211 10:38:46.484379 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10e32559-b465-4538-af8b-9dd3deedf2b9" path="/var/lib/kubelet/pods/10e32559-b465-4538-af8b-9dd3deedf2b9/volumes"
Dec 11 10:38:46 crc kubenswrapper[4953]: I1211 10:38:46.485209 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12df8687-e24e-47fb-802c-3ab978ed04fd" path="/var/lib/kubelet/pods/12df8687-e24e-47fb-802c-3ab978ed04fd/volumes"
Dec 11 10:38:46 crc kubenswrapper[4953]: I1211 10:38:46.485946 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aee1a2c-6a1e-48c0-9491-3f61371047eb" path="/var/lib/kubelet/pods/3aee1a2c-6a1e-48c0-9491-3f61371047eb/volumes"
Dec 11 10:38:46 crc kubenswrapper[4953]: I1211 10:38:46.486668 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b09879bd-62c8-4810-ad58-09db28d6afb5" path="/var/lib/kubelet/pods/b09879bd-62c8-4810-ad58-09db28d6afb5/volumes"
Dec 11 10:39:01 crc kubenswrapper[4953]: I1211 10:39:01.474597 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c"
Dec 11 10:39:01 crc kubenswrapper[4953]: E1211 10:39:01.475497 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:39:14 crc kubenswrapper[4953]: I1211 10:39:14.473270 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c"
Dec 11 10:39:14 crc kubenswrapper[4953]: E1211 10:39:14.474054 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:39:28 crc kubenswrapper[4953]: I1211 10:39:28.473742 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c"
Dec 11 10:39:28 crc kubenswrapper[4953]: E1211 10:39:28.474693 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.070696 4953 scope.go:117] "RemoveContainer" containerID="55b113bd75a4484bbac3106dd545ff87217763b1e7a5646cc0bcaf98c719fdee"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.103419 4953 scope.go:117] "RemoveContainer" containerID="ff4df84b5455a06234056225db00cca3e71fae62243dad9881fe1f0298afdb96"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.137770 4953 scope.go:117] "RemoveContainer" containerID="c0ec3c53a1addac2b6eaec9e38e1601216e6fa7d2457dfa7d6b2ce8322a36b97"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.170899 4953 scope.go:117] "RemoveContainer" containerID="e24028122b0541ca1dcdd9f22fbc323c08f54b9267d8e6861c4cf08008ec58e7"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.211030 4953 scope.go:117] "RemoveContainer" containerID="eea627d5e97aa1a1057dbb22ff6c90022c45a3a65b7f437101642242dc20cf70"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.252950 4953 scope.go:117] "RemoveContainer" containerID="e7f9da89bc4cc69fcfc29c86025b95c1e7102d882393aafef545cd1a55d66176"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.284398 4953 scope.go:117] "RemoveContainer" containerID="6cb63fc021abbf51294f3340998ca79c270fb7c016342e5021294c8868f4f534"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.321059 4953 scope.go:117] "RemoveContainer" containerID="92e616415816ed37c9fe41bce3b5cf2b458cff13348e5f0516d84a6c59d4c830"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.358985 4953 scope.go:117] "RemoveContainer" containerID="a54a95763bb0d56b10459a347095618414ea8f3db4ecc2dcde6e2baafdcdf4c0"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.379089 4953 scope.go:117] "RemoveContainer" containerID="d94ea683beee7d4145a352f9da743957bfb974fd3dcdfc07f2199ee8dca67ba5"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.395601 4953 scope.go:117] "RemoveContainer" containerID="ed0ad3252bc35dde5add6a8556d73829cf5ce860401ea68746bcbdd08f771641"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.413856 4953 scope.go:117] "RemoveContainer" containerID="ce35d9c8db3f099f3512933e98a8fa5956b4c1181fcb2c19d2c45adeaab76074"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.432479 4953 scope.go:117] "RemoveContainer" containerID="72ef224d6c1c02029434c22876ede6fd4c8207724a4168eb2cdb7e194df7370b"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.450205 4953 scope.go:117] "RemoveContainer" containerID="6f26c9a908150dcfa2ac5012a20a7a3945672bf67dd3cf13f78bf68bf18ade63"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.474247 4953 scope.go:117] "RemoveContainer" containerID="7589af4710335884c74da5d91b7244e992b3ef4b45efdaec7dc45eeb9e4bf09c"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.496937 4953 scope.go:117] "RemoveContainer" containerID="0cfe0bd98f32db174fde1333af2c3108717607f2c93978857021eab34e2c9d4e"
Dec 11 10:39:31 crc kubenswrapper[4953]: I1211 10:39:31.533043 4953 scope.go:117] "RemoveContainer" containerID="79cc47d9dc3c03e712eaad55e52c68d02d784451419037cdd7fbdbf61ac6149e"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.415360 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rvj9c"]
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.417757 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6515789-e6f6-4aa3-83f3-4fc58f862dc9" containerName="mariadb-account-delete"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.417877 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6515789-e6f6-4aa3-83f3-4fc58f862dc9" containerName="mariadb-account-delete"
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.417967 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09879bd-62c8-4810-ad58-09db28d6afb5" containerName="mariadb-account-delete"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.418051 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09879bd-62c8-4810-ad58-09db28d6afb5" containerName="mariadb-account-delete"
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.418130 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544e1955-4316-4587-90a8-94bac4f81ae5" containerName="barbican-worker"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.418200 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="544e1955-4316-4587-90a8-94bac4f81ae5" containerName="barbican-worker"
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.418802 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e64ea9-3129-46a7-8197-bdd7730ad3f1" containerName="keystone-api"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.418899 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e64ea9-3129-46a7-8197-bdd7730ad3f1" containerName="keystone-api"
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.418994 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544e1955-4316-4587-90a8-94bac4f81ae5" containerName="barbican-worker-log"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.419070 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="544e1955-4316-4587-90a8-94bac4f81ae5" containerName="barbican-worker-log"
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.419151 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1b7520-f52c-4a2a-98e5-16ac7460bade" containerName="cinder-api-log"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.419236 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1b7520-f52c-4a2a-98e5-16ac7460bade" containerName="cinder-api-log"
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.419324 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" containerName="openstack-network-exporter"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.419413 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" containerName="openstack-network-exporter"
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.419493 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261b522a-b786-4b2b-975c-43f1cc0d8ccf" containerName="neutron-api"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.419594 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="261b522a-b786-4b2b-975c-43f1cc0d8ccf" containerName="neutron-api"
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.419691 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-reaper"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.419769 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-reaper"
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.421666 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261b522a-b786-4b2b-975c-43f1cc0d8ccf" containerName="neutron-httpd"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.421862 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="261b522a-b786-4b2b-975c-43f1cc0d8ccf" containerName="neutron-httpd"
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.421956 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29c8985-0d8c-4382-9969-29422929136f" containerName="rabbitmq"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.422043 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29c8985-0d8c-4382-9969-29422929136f" containerName="rabbitmq"
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.422134 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" containerName="ovn-northd"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.422216 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" containerName="ovn-northd"
Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.422314 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8521d832-efe5-4653-8c0e-8921f916e10f" containerName="proxy-httpd"
Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.422407
4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8521d832-efe5-4653-8c0e-8921f916e10f" containerName="proxy-httpd" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.422494 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01196778-96de-4f79-b9ac-e01243f86ebb" containerName="rabbitmq" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.422590 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="01196778-96de-4f79-b9ac-e01243f86ebb" containerName="rabbitmq" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.422681 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3d5c24-61f6-4926-94ec-0e3a462334df" containerName="nova-cell0-conductor-conductor" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.422769 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3d5c24-61f6-4926-94ec-0e3a462334df" containerName="nova-cell0-conductor-conductor" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.423734 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-expirer" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.423835 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-expirer" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.423921 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767370a9-f8dd-4370-a2cc-f5baeff52c54" containerName="barbican-api-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.424011 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="767370a9-f8dd-4370-a2cc-f5baeff52c54" containerName="barbican-api-log" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.424096 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af3727e-8096-420d-b8d0-95988a5d36db" containerName="nova-scheduler-scheduler" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.424179 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af3727e-8096-420d-b8d0-95988a5d36db" containerName="nova-scheduler-scheduler" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.424268 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29c8985-0d8c-4382-9969-29422929136f" containerName="setup-container" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.424344 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29c8985-0d8c-4382-9969-29422929136f" containerName="setup-container" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.424427 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="swift-recon-cron" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.424502 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="swift-recon-cron" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.424614 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e067a835-8a1a-4672-aaea-b8c101109018" containerName="glance-httpd" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.424702 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e067a835-8a1a-4672-aaea-b8c101109018" containerName="glance-httpd" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.424795 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server-init" Dec 11 10:39:37 
crc kubenswrapper[4953]: I1211 10:39:37.424906 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server-init" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.424998 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1b7520-f52c-4a2a-98e5-16ac7460bade" containerName="cinder-api" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.425151 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1b7520-f52c-4a2a-98e5-16ac7460bade" containerName="cinder-api" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.425297 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c566b6b-16f8-422c-acda-0325e36103e6" containerName="nova-cell1-conductor-conductor" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.425383 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c566b6b-16f8-422c-acda-0325e36103e6" containerName="nova-cell1-conductor-conductor" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.425460 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-updater" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.425545 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-updater" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.425665 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-server" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.425757 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-server" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.426560 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-auditor" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.426702 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-auditor" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.426851 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-metadata" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.426944 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-metadata" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.427024 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1833793-1408-450f-8a7e-e01e6048edd5" containerName="cinder-scheduler" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.427102 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1833793-1408-450f-8a7e-e01e6048edd5" containerName="cinder-scheduler" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.427197 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.427277 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.427359 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" 
containerName="placement-api" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.427428 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" containerName="placement-api" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.427499 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab07f951-5c8d-428b-9b26-52ea2284ee52" containerName="memcached" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.427599 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab07f951-5c8d-428b-9b26-52ea2284ee52" containerName="memcached" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.427702 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caec0159-12b1-46f9-952c-10f229948036" containerName="barbican-keystone-listener" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.427783 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="caec0159-12b1-46f9-952c-10f229948036" containerName="barbican-keystone-listener" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.427999 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-replicator" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.428149 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-replicator" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.428235 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b77681a-0823-42e6-b0a4-2af1ce955970" containerName="glance-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.428320 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b77681a-0823-42e6-b0a4-2af1ce955970" containerName="glance-log" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.428399 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12df8687-e24e-47fb-802c-3ab978ed04fd" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.428474 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="12df8687-e24e-47fb-802c-3ab978ed04fd" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.428552 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-replicator" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.428658 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-replicator" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.428742 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da03c89-b3fb-431e-bef0-eb8f6d0b180e" containerName="kube-state-metrics" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.428835 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da03c89-b3fb-431e-bef0-eb8f6d0b180e" containerName="kube-state-metrics" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.428935 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27258186-4cab-45b4-a20c-a4c3ddc82f76" containerName="mysql-bootstrap" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.429020 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="27258186-4cab-45b4-a20c-a4c3ddc82f76" containerName="mysql-bootstrap" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.429158 4953 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="23f99edb-3870-42f3-bdef-ec4db335ba35" containerName="mysql-bootstrap" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.429299 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f99edb-3870-42f3-bdef-ec4db335ba35" containerName="mysql-bootstrap" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.429390 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" containerName="nova-api-api" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.429473 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" containerName="nova-api-api" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.429561 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1833793-1408-450f-8a7e-e01e6048edd5" containerName="probe" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.429668 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1833793-1408-450f-8a7e-e01e6048edd5" containerName="probe" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.429769 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="ceilometer-central-agent" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.429842 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="ceilometer-central-agent" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.430008 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="ceilometer-notification-agent" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.430093 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="ceilometer-notification-agent" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.430216 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-auditor" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.430345 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-auditor" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.430418 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f99edb-3870-42f3-bdef-ec4db335ba35" containerName="galera" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.430500 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f99edb-3870-42f3-bdef-ec4db335ba35" containerName="galera" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.430582 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8521d832-efe5-4653-8c0e-8921f916e10f" containerName="proxy-server" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.430662 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8521d832-efe5-4653-8c0e-8921f916e10f" containerName="proxy-server" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.430747 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" containerName="nova-api-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.430817 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" containerName="nova-api-log" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.430896 4953 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="79a93889-ae40-4bd1-a697-5797e065231b" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.430972 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a93889-ae40-4bd1-a697-5797e065231b" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.431043 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-auditor" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.431112 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-auditor" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.431229 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e067a835-8a1a-4672-aaea-b8c101109018" containerName="glance-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.431337 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e067a835-8a1a-4672-aaea-b8c101109018" containerName="glance-log" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.431409 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovs-vswitchd" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.431476 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovs-vswitchd" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.431658 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27258186-4cab-45b4-a20c-a4c3ddc82f76" containerName="galera" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.431741 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="27258186-4cab-45b4-a20c-a4c3ddc82f76" containerName="galera" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.431816 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46cd550e-17c8-4cd2-a5e0-9746edf42836" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.431896 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="46cd550e-17c8-4cd2-a5e0-9746edf42836" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.431971 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aee1a2c-6a1e-48c0-9491-3f61371047eb" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.432041 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aee1a2c-6a1e-48c0-9491-3f61371047eb" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.432112 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="proxy-httpd" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.432184 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="proxy-httpd" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.432257 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767370a9-f8dd-4370-a2cc-f5baeff52c54" containerName="barbican-api" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.432329 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="767370a9-f8dd-4370-a2cc-f5baeff52c54" containerName="barbican-api" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.432408 4953 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="sg-core" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.432490 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="sg-core" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.432566 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.432668 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-log" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.432744 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="rsync" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.432817 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="rsync" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.432897 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-updater" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.432969 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-updater" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.433051 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" containerName="placement-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.433131 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" containerName="placement-log" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.433215 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01196778-96de-4f79-b9ac-e01243f86ebb" containerName="setup-container" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.433308 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="01196778-96de-4f79-b9ac-e01243f86ebb" containerName="setup-container" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.433397 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e32559-b465-4538-af8b-9dd3deedf2b9" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.433481 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e32559-b465-4538-af8b-9dd3deedf2b9" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.433559 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-replicator" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.433658 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-replicator" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.433747 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caec0159-12b1-46f9-952c-10f229948036" containerName="barbican-keystone-listener-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.433822 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="caec0159-12b1-46f9-952c-10f229948036" containerName="barbican-keystone-listener-log" Dec 11 10:39:37 crc 
kubenswrapper[4953]: E1211 10:39:37.433908 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-server" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.433981 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-server" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.434064 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992b7c13-c6c6-4641-9c9a-3d8bfbd5029c" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.434132 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="992b7c13-c6c6-4641-9c9a-3d8bfbd5029c" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.434219 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-server" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.434312 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-server" Dec 11 10:39:37 crc kubenswrapper[4953]: E1211 10:39:37.434393 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b77681a-0823-42e6-b0a4-2af1ce955970" containerName="glance-httpd" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.434463 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b77681a-0823-42e6-b0a4-2af1ce955970" containerName="glance-httpd" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.434855 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab07f951-5c8d-428b-9b26-52ea2284ee52" containerName="memcached" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.434968 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="79a93889-ae40-4bd1-a697-5797e065231b" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435066 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="b09879bd-62c8-4810-ad58-09db28d6afb5" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435141 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-metadata" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435218 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="8521d832-efe5-4653-8c0e-8921f916e10f" containerName="proxy-server" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435293 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-server" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435358 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="261b522a-b786-4b2b-975c-43f1cc0d8ccf" containerName="neutron-api" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435418 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b77681a-0823-42e6-b0a4-2af1ce955970" containerName="glance-httpd" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435474 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-replicator" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435548 4953 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" containerName="openstack-network-exporter" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435667 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="544e1955-4316-4587-90a8-94bac4f81ae5" containerName="barbican-worker" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435755 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da03c89-b3fb-431e-bef0-eb8f6d0b180e" containerName="kube-state-metrics" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435845 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af3727e-8096-420d-b8d0-95988a5d36db" containerName="nova-scheduler-scheduler" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435915 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1833793-1408-450f-8a7e-e01e6048edd5" containerName="probe" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.435988 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="caec0159-12b1-46f9-952c-10f229948036" containerName="barbican-keystone-listener" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.436058 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-updater" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.436169 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1b7520-f52c-4a2a-98e5-16ac7460bade" containerName="cinder-api" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.436305 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd4593de-19d2-47c1-b6b0-b9c0e46e1107" containerName="nova-metadata-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.436388 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-auditor" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.436447 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="46cd550e-17c8-4cd2-a5e0-9746edf42836" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.436502 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-auditor" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445673 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6515789-e6f6-4aa3-83f3-4fc58f862dc9" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445728 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="4287349e-ff2e-483c-9ede-08ec5e03a2b4" containerName="ovn-northd" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445755 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="767370a9-f8dd-4370-a2cc-f5baeff52c54" containerName="barbican-api-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445774 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" containerName="nova-api-api" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445790 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="992b7c13-c6c6-4641-9c9a-3d8bfbd5029c" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445801 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" 
containerName="swift-recon-cron" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445813 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-replicator" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445825 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-auditor" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445837 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3d5c24-61f6-4926-94ec-0e3a462334df" containerName="nova-cell0-conductor-conductor" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445850 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="ceilometer-central-agent" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445867 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e32559-b465-4538-af8b-9dd3deedf2b9" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445887 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29c8985-0d8c-4382-9969-29422929136f" containerName="rabbitmq" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445902 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b77681a-0823-42e6-b0a4-2af1ce955970" containerName="glance-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445920 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1833793-1408-450f-8a7e-e01e6048edd5" containerName="cinder-scheduler" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445938 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="proxy-httpd" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445947 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="12df8687-e24e-47fb-802c-3ab978ed04fd" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445961 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c566b6b-16f8-422c-acda-0325e36103e6" containerName="nova-cell1-conductor-conductor" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445979 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-server" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.445990 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="caec0159-12b1-46f9-952c-10f229948036" containerName="barbican-keystone-listener-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446003 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="544e1955-4316-4587-90a8-94bac4f81ae5" containerName="barbican-worker-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446015 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e067a835-8a1a-4672-aaea-b8c101109018" containerName="glance-httpd" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446029 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-server" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446041 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovs-vswitchd" Dec 11 10:39:37 
crc kubenswrapper[4953]: I1211 10:39:37.446052 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="27258186-4cab-45b4-a20c-a4c3ddc82f76" containerName="galera" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446063 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aee1a2c-6a1e-48c0-9491-3f61371047eb" containerName="mariadb-account-delete" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446077 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b66dbe7-edd9-4e23-a3d0-0661efe89ac6" containerName="nova-api-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446092 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="rsync" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446109 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="767370a9-f8dd-4370-a2cc-f5baeff52c54" containerName="barbican-api" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446128 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-expirer" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446141 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1b7520-f52c-4a2a-98e5-16ac7460bade" containerName="cinder-api-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446152 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="ceilometer-notification-agent" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446167 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f99edb-3870-42f3-bdef-ec4db335ba35" containerName="galera" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446177 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="object-replicator" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446190 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e067a835-8a1a-4672-aaea-b8c101109018" containerName="glance-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446205 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfd14e5-05e2-4cc5-ba83-259321c6f872" containerName="ovsdb-server" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446215 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="container-updater" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446224 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" containerName="placement-log" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446232 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="261b522a-b786-4b2b-975c-43f1cc0d8ccf" containerName="neutron-httpd" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446245 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="8521d832-efe5-4653-8c0e-8921f916e10f" containerName="proxy-httpd" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446258 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdfbbe2-a3b8-4834-9920-114c40de67dc" containerName="sg-core" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446272 4953 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7be1c768-78bb-476b-b51d-8e4fe80b8500" containerName="account-reaper" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446283 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="345a513a-93a0-4e23-9266-3eeaf3ff0c10" containerName="placement-api" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446297 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="01196778-96de-4f79-b9ac-e01243f86ebb" containerName="rabbitmq" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.446315 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e64ea9-3129-46a7-8197-bdd7730ad3f1" containerName="keystone-api" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.447786 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvj9c"] Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.447898 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.572315 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60f8769a-d46e-4783-bda4-6583080f5eff-utilities\") pod \"redhat-marketplace-rvj9c\" (UID: \"60f8769a-d46e-4783-bda4-6583080f5eff\") " pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.572726 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60f8769a-d46e-4783-bda4-6583080f5eff-catalog-content\") pod \"redhat-marketplace-rvj9c\" (UID: \"60f8769a-d46e-4783-bda4-6583080f5eff\") " pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.572767 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pz4z\" (UniqueName: \"kubernetes.io/projected/60f8769a-d46e-4783-bda4-6583080f5eff-kube-api-access-9pz4z\") pod \"redhat-marketplace-rvj9c\" (UID: \"60f8769a-d46e-4783-bda4-6583080f5eff\") " pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.674472 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60f8769a-d46e-4783-bda4-6583080f5eff-utilities\") pod \"redhat-marketplace-rvj9c\" (UID: \"60f8769a-d46e-4783-bda4-6583080f5eff\") " pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.674558 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60f8769a-d46e-4783-bda4-6583080f5eff-catalog-content\") pod \"redhat-marketplace-rvj9c\" (UID: \"60f8769a-d46e-4783-bda4-6583080f5eff\") " pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.674653 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pz4z\" (UniqueName: \"kubernetes.io/projected/60f8769a-d46e-4783-bda4-6583080f5eff-kube-api-access-9pz4z\") pod \"redhat-marketplace-rvj9c\" (UID: \"60f8769a-d46e-4783-bda4-6583080f5eff\") " pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.675187 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60f8769a-d46e-4783-bda4-6583080f5eff-utilities\") pod \"redhat-marketplace-rvj9c\" (UID: \"60f8769a-d46e-4783-bda4-6583080f5eff\") " pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.675253 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60f8769a-d46e-4783-bda4-6583080f5eff-catalog-content\") pod \"redhat-marketplace-rvj9c\" (UID: \"60f8769a-d46e-4783-bda4-6583080f5eff\") " pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.697828 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pz4z\" (UniqueName: \"kubernetes.io/projected/60f8769a-d46e-4783-bda4-6583080f5eff-kube-api-access-9pz4z\") pod \"redhat-marketplace-rvj9c\" (UID: \"60f8769a-d46e-4783-bda4-6583080f5eff\") " pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:37 crc kubenswrapper[4953]: I1211 10:39:37.766998 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:38 crc kubenswrapper[4953]: I1211 10:39:38.097657 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvj9c"] Dec 11 10:39:38 crc kubenswrapper[4953]: I1211 10:39:38.432190 4953 generic.go:334] "Generic (PLEG): container finished" podID="60f8769a-d46e-4783-bda4-6583080f5eff" containerID="1191a0cd3c90c2fc2ab96f3b1e894f28ea4231c1045a6f6170ed7aa4578e0ca9" exitCode=0 Dec 11 10:39:38 crc kubenswrapper[4953]: I1211 10:39:38.432974 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvj9c" event={"ID":"60f8769a-d46e-4783-bda4-6583080f5eff","Type":"ContainerDied","Data":"1191a0cd3c90c2fc2ab96f3b1e894f28ea4231c1045a6f6170ed7aa4578e0ca9"} Dec 11 10:39:38 crc kubenswrapper[4953]: I1211 10:39:38.433049 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvj9c" event={"ID":"60f8769a-d46e-4783-bda4-6583080f5eff","Type":"ContainerStarted","Data":"ecb295a92caec6b89fc4253fc85ae05a0b83a44fbe1bdbd0d19ae1f693fef70a"} Dec 11 10:39:39 crc kubenswrapper[4953]: I1211 10:39:39.443425 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvj9c" event={"ID":"60f8769a-d46e-4783-bda4-6583080f5eff","Type":"ContainerStarted","Data":"0619b6f632462805680c7f459f26cbf9a1cfd7056c98ee0b4dfeaa01ab3fcbb3"} Dec 11 10:39:40 crc kubenswrapper[4953]: I1211 10:39:40.454732 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvj9c" event={"ID":"60f8769a-d46e-4783-bda4-6583080f5eff","Type":"ContainerDied","Data":"0619b6f632462805680c7f459f26cbf9a1cfd7056c98ee0b4dfeaa01ab3fcbb3"} Dec 11 10:39:40 crc kubenswrapper[4953]: I1211 10:39:40.455396 4953 generic.go:334] "Generic (PLEG): container finished" podID="60f8769a-d46e-4783-bda4-6583080f5eff" containerID="0619b6f632462805680c7f459f26cbf9a1cfd7056c98ee0b4dfeaa01ab3fcbb3" exitCode=0 Dec 11 10:39:41 crc kubenswrapper[4953]: I1211 10:39:41.470116 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvj9c" 
event={"ID":"60f8769a-d46e-4783-bda4-6583080f5eff","Type":"ContainerStarted","Data":"03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279"} Dec 11 10:39:41 crc kubenswrapper[4953]: I1211 10:39:41.492020 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rvj9c" podStartSLOduration=1.622114089 podStartE2EDuration="4.492000236s" podCreationTimestamp="2025-12-11 10:39:37 +0000 UTC" firstStartedPulling="2025-12-11 10:39:38.434484015 +0000 UTC m=+1696.458343048" lastFinishedPulling="2025-12-11 10:39:41.304370152 +0000 UTC m=+1699.328229195" observedRunningTime="2025-12-11 10:39:41.488356342 +0000 UTC m=+1699.512215375" watchObservedRunningTime="2025-12-11 10:39:41.492000236 +0000 UTC m=+1699.515859279" Dec 11 10:39:42 crc kubenswrapper[4953]: I1211 10:39:42.478496 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:39:42 crc kubenswrapper[4953]: E1211 10:39:42.479153 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:39:47 crc kubenswrapper[4953]: I1211 10:39:47.767788 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:47 crc kubenswrapper[4953]: I1211 10:39:47.768212 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:47 crc kubenswrapper[4953]: I1211 10:39:47.919094 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:48 crc kubenswrapper[4953]: I1211 10:39:48.572801 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:48 crc kubenswrapper[4953]: I1211 10:39:48.614440 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvj9c"] Dec 11 10:39:50 crc kubenswrapper[4953]: I1211 10:39:50.545815 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rvj9c" podUID="60f8769a-d46e-4783-bda4-6583080f5eff" containerName="registry-server" containerID="cri-o://03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279" gracePeriod=2 Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.455326 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.554706 4953 generic.go:334] "Generic (PLEG): container finished" podID="60f8769a-d46e-4783-bda4-6583080f5eff" containerID="03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279" exitCode=0 Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.554751 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvj9c" event={"ID":"60f8769a-d46e-4783-bda4-6583080f5eff","Type":"ContainerDied","Data":"03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279"} Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.554776 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvj9c" event={"ID":"60f8769a-d46e-4783-bda4-6583080f5eff","Type":"ContainerDied","Data":"ecb295a92caec6b89fc4253fc85ae05a0b83a44fbe1bdbd0d19ae1f693fef70a"} Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.554794 4953 scope.go:117] "RemoveContainer" containerID="03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.554878 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvj9c" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.573327 4953 scope.go:117] "RemoveContainer" containerID="0619b6f632462805680c7f459f26cbf9a1cfd7056c98ee0b4dfeaa01ab3fcbb3" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.576847 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pz4z\" (UniqueName: \"kubernetes.io/projected/60f8769a-d46e-4783-bda4-6583080f5eff-kube-api-access-9pz4z\") pod \"60f8769a-d46e-4783-bda4-6583080f5eff\" (UID: \"60f8769a-d46e-4783-bda4-6583080f5eff\") " Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.577077 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60f8769a-d46e-4783-bda4-6583080f5eff-utilities\") pod \"60f8769a-d46e-4783-bda4-6583080f5eff\" (UID: \"60f8769a-d46e-4783-bda4-6583080f5eff\") " Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.577133 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60f8769a-d46e-4783-bda4-6583080f5eff-catalog-content\") pod \"60f8769a-d46e-4783-bda4-6583080f5eff\" (UID: \"60f8769a-d46e-4783-bda4-6583080f5eff\") " Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.577952 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60f8769a-d46e-4783-bda4-6583080f5eff-utilities" (OuterVolumeSpecName: "utilities") pod "60f8769a-d46e-4783-bda4-6583080f5eff" (UID: "60f8769a-d46e-4783-bda4-6583080f5eff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.594091 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f8769a-d46e-4783-bda4-6583080f5eff-kube-api-access-9pz4z" (OuterVolumeSpecName: "kube-api-access-9pz4z") pod "60f8769a-d46e-4783-bda4-6583080f5eff" (UID: "60f8769a-d46e-4783-bda4-6583080f5eff"). InnerVolumeSpecName "kube-api-access-9pz4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.614508 4953 scope.go:117] "RemoveContainer" containerID="1191a0cd3c90c2fc2ab96f3b1e894f28ea4231c1045a6f6170ed7aa4578e0ca9" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.630806 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60f8769a-d46e-4783-bda4-6583080f5eff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60f8769a-d46e-4783-bda4-6583080f5eff" (UID: "60f8769a-d46e-4783-bda4-6583080f5eff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.642920 4953 scope.go:117] "RemoveContainer" containerID="03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279" Dec 11 10:39:51 crc kubenswrapper[4953]: E1211 10:39:51.643437 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279\": container with ID starting with 03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279 not found: ID does not exist" containerID="03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.643478 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279"} err="failed to get container status \"03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279\": rpc error: code = NotFound desc = could not find container \"03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279\": container with ID starting with 03bb4ae5d6a10fb70f6be5ff573d31ccd19a505df2454e0e6824a3674105f279 not found: ID does not exist" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.643513 4953 scope.go:117] "RemoveContainer" containerID="0619b6f632462805680c7f459f26cbf9a1cfd7056c98ee0b4dfeaa01ab3fcbb3" Dec 11 10:39:51 crc kubenswrapper[4953]: E1211 10:39:51.643894 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0619b6f632462805680c7f459f26cbf9a1cfd7056c98ee0b4dfeaa01ab3fcbb3\": container with ID starting with 0619b6f632462805680c7f459f26cbf9a1cfd7056c98ee0b4dfeaa01ab3fcbb3 not found: ID does not exist" containerID="0619b6f632462805680c7f459f26cbf9a1cfd7056c98ee0b4dfeaa01ab3fcbb3" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.643923 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0619b6f632462805680c7f459f26cbf9a1cfd7056c98ee0b4dfeaa01ab3fcbb3"} err="failed to get container status \"0619b6f632462805680c7f459f26cbf9a1cfd7056c98ee0b4dfeaa01ab3fcbb3\": rpc error: code = NotFound desc = could not find container \"0619b6f632462805680c7f459f26cbf9a1cfd7056c98ee0b4dfeaa01ab3fcbb3\": container with ID starting with 0619b6f632462805680c7f459f26cbf9a1cfd7056c98ee0b4dfeaa01ab3fcbb3 not found: ID does not exist" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.643942 4953 scope.go:117] "RemoveContainer" containerID="1191a0cd3c90c2fc2ab96f3b1e894f28ea4231c1045a6f6170ed7aa4578e0ca9" Dec 11 10:39:51 crc kubenswrapper[4953]: E1211 10:39:51.644168 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1191a0cd3c90c2fc2ab96f3b1e894f28ea4231c1045a6f6170ed7aa4578e0ca9\": container with ID starting with 1191a0cd3c90c2fc2ab96f3b1e894f28ea4231c1045a6f6170ed7aa4578e0ca9 not found: ID does not exist" containerID="1191a0cd3c90c2fc2ab96f3b1e894f28ea4231c1045a6f6170ed7aa4578e0ca9" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.644205 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1191a0cd3c90c2fc2ab96f3b1e894f28ea4231c1045a6f6170ed7aa4578e0ca9"} err="failed to get container status \"1191a0cd3c90c2fc2ab96f3b1e894f28ea4231c1045a6f6170ed7aa4578e0ca9\": rpc error: code = NotFound desc = could not find container \"1191a0cd3c90c2fc2ab96f3b1e894f28ea4231c1045a6f6170ed7aa4578e0ca9\": container with ID starting with 1191a0cd3c90c2fc2ab96f3b1e894f28ea4231c1045a6f6170ed7aa4578e0ca9 not found: ID does not exist" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.678293 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60f8769a-d46e-4783-bda4-6583080f5eff-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.678331 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60f8769a-d46e-4783-bda4-6583080f5eff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.678346 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pz4z\" (UniqueName: \"kubernetes.io/projected/60f8769a-d46e-4783-bda4-6583080f5eff-kube-api-access-9pz4z\") on node \"crc\" DevicePath \"\"" Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.970664 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvj9c"] Dec 11 10:39:51 crc kubenswrapper[4953]: I1211 10:39:51.981321 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvj9c"] Dec 11 10:39:52 crc kubenswrapper[4953]: I1211 10:39:52.481721 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60f8769a-d46e-4783-bda4-6583080f5eff" path="/var/lib/kubelet/pods/60f8769a-d46e-4783-bda4-6583080f5eff/volumes" Dec 11 10:39:53 crc kubenswrapper[4953]: I1211 10:39:53.473028 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:39:53 crc kubenswrapper[4953]: E1211 10:39:53.473262 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:40:07 crc kubenswrapper[4953]: I1211 10:40:07.473730 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:40:07 crc kubenswrapper[4953]: E1211 10:40:07.474511 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:40:18 crc kubenswrapper[4953]: I1211 10:40:18.473962 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:40:18 crc kubenswrapper[4953]: E1211 10:40:18.477882 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:40:31 crc kubenswrapper[4953]: I1211 10:40:31.473982 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:40:31 crc kubenswrapper[4953]: E1211 10:40:31.474907 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:40:31 crc kubenswrapper[4953]: I1211 10:40:31.952105 4953 scope.go:117] "RemoveContainer" containerID="7ed21a9491280858f26ed7624cc4024f4c01f69b9e05e955a2aacf00f9ef4ee2" Dec 11 10:40:31 crc kubenswrapper[4953]: I1211 10:40:31.992306 4953 scope.go:117] "RemoveContainer" containerID="a0f8dccdb79158a6e8b0a17545e3aa563f2f8c468c204b94766b32ede580daf7" Dec 11 10:40:32 crc kubenswrapper[4953]: I1211 10:40:32.030107 4953 scope.go:117] "RemoveContainer" containerID="176a6175d3f004b18252df1170910cef32683d1646cc584d7160b5f2877d0bf5" Dec 11 10:40:32 crc kubenswrapper[4953]: I1211 10:40:32.068393 4953 scope.go:117] "RemoveContainer" containerID="dfc2a9a94740a5c1e7c18669633ef308479efaeb144cead2c91d20383752f603" Dec 11 10:40:32 crc kubenswrapper[4953]: I1211 10:40:32.087160 4953 scope.go:117] "RemoveContainer" containerID="18da5e592b7c52416a16674c9366cc1d74bf9348703b8485835f8bd76a25aaba" Dec 11 10:40:32 crc kubenswrapper[4953]: I1211 10:40:32.112928 4953 scope.go:117] "RemoveContainer" containerID="b01cdea6f093563c7ea84139090a75d43cc46e5f226de5ff7edc4ba180c43ab7" Dec 11 10:40:32 crc kubenswrapper[4953]: I1211 10:40:32.151277 4953 scope.go:117] "RemoveContainer" containerID="034d6a32090ba212ff6b84ead3f44683fd598c601b40628759991ada8819812b" Dec 11 10:40:32 crc kubenswrapper[4953]: I1211 10:40:32.188497 4953 scope.go:117] "RemoveContainer" containerID="5de87eeb054b473acfa2ae00d395cdca8c1df68037366bf5b76babf98bca8bea" Dec 11 10:40:32 crc kubenswrapper[4953]: I1211 10:40:32.233959 4953 scope.go:117] "RemoveContainer" containerID="111e78ea1225285d6f9cf9e61ccddd3adee93f71a7ea5c5159526554c821ed7c" Dec 11 10:40:45 crc kubenswrapper[4953]: I1211 10:40:45.473856 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:40:45 crc kubenswrapper[4953]: E1211 10:40:45.475127 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:40:56 crc kubenswrapper[4953]: I1211 10:40:56.473192 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:40:56 crc kubenswrapper[4953]: E1211 10:40:56.474164 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:41:08 crc kubenswrapper[4953]: I1211 10:41:08.473396 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:41:08 crc kubenswrapper[4953]: E1211 10:41:08.474078 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:41:21 crc kubenswrapper[4953]: I1211 10:41:21.473390 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:41:21 crc kubenswrapper[4953]: E1211 10:41:21.476285 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.375557 4953 scope.go:117] "RemoveContainer" containerID="d900251d830ca62ae055c9f9a2f8078dbd3d1545f50142030a3209a17a071070" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.403887 4953 scope.go:117] "RemoveContainer" containerID="6cb07fdb5e67db9e16c8125784b8b3014f71452b7d478333ae5ae1ede91ec6ff" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.423167 4953 scope.go:117] "RemoveContainer" containerID="10c8d2ad3cc0892332f1c89eaa6623a95eee3fe76e3f7a9daa5675267b2e9091" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.442403 4953 scope.go:117] "RemoveContainer" containerID="30c55b9a63cff189be97f461ce82cf19d069820c204b72f08733751b6e4d8e3b" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.473252 4953 scope.go:117] "RemoveContainer" containerID="2e9a60ec1684ff881133bf906166805dce055256199aa98702401b39a20c68d8" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.491859 4953 scope.go:117] "RemoveContainer" containerID="7be2bbaefa3e689cb3eb71687b4eaaaa7ace9bf5c6191bc5de9d655c138598a0" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.512211 4953 scope.go:117] "RemoveContainer" containerID="2267e211bfce5ffc305d093bf44d566700acca94813d4aea6430f83c0ecb326d" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.531189 
4953 scope.go:117] "RemoveContainer" containerID="6633d2d60118f289461651ca377abc04f8eae490967bd314f612d43a8c179596" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.560717 4953 scope.go:117] "RemoveContainer" containerID="120e662c3201d0f81e55488f64c74d01e67c74d5af04b0ca903d4ba77213d505" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.582717 4953 scope.go:117] "RemoveContainer" containerID="a1dd894fb738f43b760b8725bd438e6786b826d8bd5ea6ec40ebf1c67bee2cc0" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.602102 4953 scope.go:117] "RemoveContainer" containerID="7d7961ffaf0fa5639d3e96bbbb7ff1815fd8017ed09d51fdb3f868fc15297c07" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.627507 4953 scope.go:117] "RemoveContainer" containerID="ae1625ae9b7343e79bf1b390eabfcbbde5a933a354f8b01e304d5b2edc571afd" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.644119 4953 scope.go:117] "RemoveContainer" containerID="9bcdd67ff3f27b165dca3277b206f20442bbecd9d522b5435dc8a058e29f8375" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.659114 4953 scope.go:117] "RemoveContainer" containerID="2225e689693ec9440a39dbcbbd5349461e4303b1a07c6561f0168f098dab8191" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.677249 4953 scope.go:117] "RemoveContainer" containerID="abbf1dca057a3008bc1e0fc9376d99eca8728bfe7f2c6e01f3f4573f09a97a8a" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.706653 4953 scope.go:117] "RemoveContainer" containerID="6b9b936ddf7f45582285d8d2e0da2428665ce0beadde06342951de718bfb19dc" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.736101 4953 scope.go:117] "RemoveContainer" containerID="3dd428abe094a4785fe247c46053c25a62247016c38cd9af55762bbf581ab80f" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.752174 4953 scope.go:117] "RemoveContainer" containerID="0491ffc5f1bc0bc44582efeba6a5a0935a60c935c046c46e19e369d2e4913539" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.775466 4953 scope.go:117] "RemoveContainer" containerID="fdd070f6d6ae0ce7f82371346d196e9b99e064f7ba7340450355520e870d3e65" Dec 11 10:41:32 crc kubenswrapper[4953]: I1211 10:41:32.795123 4953 scope.go:117] "RemoveContainer" containerID="4221eaf86758a08993df7de85552e51a217b8b7260281a70c92cd1a666135bc7" Dec 11 10:41:36 crc kubenswrapper[4953]: I1211 10:41:36.473967 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:41:36 crc kubenswrapper[4953]: E1211 10:41:36.474991 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:41:48 crc kubenswrapper[4953]: I1211 10:41:48.472805 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:41:48 crc kubenswrapper[4953]: E1211 10:41:48.473592 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:41:59 crc kubenswrapper[4953]: I1211 10:41:59.472796 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:41:59 crc kubenswrapper[4953]: E1211 10:41:59.473699 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:42:12 crc kubenswrapper[4953]: I1211 10:42:12.481268 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:42:12 crc kubenswrapper[4953]: E1211 10:42:12.482261 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:42:26 crc kubenswrapper[4953]: I1211 10:42:26.473974 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:42:26 crc kubenswrapper[4953]: E1211 10:42:26.475056 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:42:33 crc kubenswrapper[4953]: I1211 10:42:33.026363 4953 scope.go:117] "RemoveContainer" containerID="12197dc212f018a71653f4be4db791ba82015fdb5a956452eb7226315756bede" Dec 11 10:42:33 crc kubenswrapper[4953]: I1211 10:42:33.080923 4953 scope.go:117] "RemoveContainer" containerID="e77c7ae1c87e7949e1f82009c61668e44597f8128ff13d56a0b924b074388ac2" Dec 11 10:42:33 crc kubenswrapper[4953]: I1211 10:42:33.098338 4953 scope.go:117] "RemoveContainer" containerID="884bf7209c56c792aa6f6119e59a431349030aeb6799dab773a8e92ad0b9f5b2" Dec 11 10:42:33 crc kubenswrapper[4953]: I1211 10:42:33.138371 4953 scope.go:117] "RemoveContainer" containerID="14cefb58e0c43b389056f4cf8bb308599dd6cf25bab0e3a4846b0b83bde66613" Dec 11 10:42:41 crc kubenswrapper[4953]: I1211 10:42:41.473775 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:42:41 crc kubenswrapper[4953]: E1211 10:42:41.474651 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:42:56 crc 
kubenswrapper[4953]: I1211 10:42:56.473272 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:42:56 crc kubenswrapper[4953]: E1211 10:42:56.474161 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:43:08 crc kubenswrapper[4953]: I1211 10:43:08.474309 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:43:08 crc kubenswrapper[4953]: E1211 10:43:08.475067 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:43:22 crc kubenswrapper[4953]: I1211 10:43:22.478724 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c" Dec 11 10:43:22 crc kubenswrapper[4953]: I1211 10:43:22.916611 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"94deffc6236f608b04d94b2a32d4ef8e60aa95b5ef694b77f8c376b292561de2"} Dec 11 10:43:33 crc kubenswrapper[4953]: I1211 10:43:33.240791 4953 scope.go:117] "RemoveContainer" containerID="1bc9138327f744de0ebda2b78524028f949e14cc2e96f94f06d591c861c22632" Dec 11 10:43:33 crc kubenswrapper[4953]: I1211 10:43:33.304062 4953 scope.go:117] "RemoveContainer" containerID="bc5af3bb14085fb06d4fbb19425f7956a6394356771a345adb23fe49da34c4ef" Dec 11 10:43:33 crc kubenswrapper[4953]: I1211 10:43:33.324022 4953 scope.go:117] "RemoveContainer" containerID="39ba09432c8d47141f48eb0a06529b605d51f099d8537d288c6ec875cebae528" Dec 11 10:43:33 crc kubenswrapper[4953]: I1211 10:43:33.345246 4953 scope.go:117] "RemoveContainer" containerID="4b50ba4bbcce41a88a593c3c973f004487f31dd61dfe5a71e64a83db8f9a9c2f" Dec 11 10:43:33 crc kubenswrapper[4953]: I1211 10:43:33.369666 4953 scope.go:117] "RemoveContainer" containerID="76b1adf1ecb9cc73cce6fab14903ebf309e0061c7db3b0247296d4d28611c686" Dec 11 10:43:33 crc kubenswrapper[4953]: I1211 10:43:33.387041 4953 scope.go:117] "RemoveContainer" containerID="33acb5b8399e690c332876bb46d0a8aa9f480f6d6435312361f99da160bb499a" Dec 11 10:43:33 crc kubenswrapper[4953]: I1211 10:43:33.401633 4953 scope.go:117] "RemoveContainer" containerID="f7ee9bc67732ebd8c3394ac44e0ed0e4085ee206951e3925b501367392391bbe" Dec 11 10:44:33 crc kubenswrapper[4953]: I1211 10:44:33.488063 4953 scope.go:117] "RemoveContainer" containerID="52a8a81efc76dd9871260a4f51ba575eeb502a0c5bcdca997ff392a64c988a8a" Dec 11 10:44:33 crc kubenswrapper[4953]: I1211 10:44:33.584722 4953 scope.go:117] "RemoveContainer" containerID="48707275b7bce1fa32e26a8593896bb5249bdc8d8437a70dc6fc21de8d0d7886" Dec 11 10:44:33 crc kubenswrapper[4953]: I1211 10:44:33.603966 4953 scope.go:117] "RemoveContainer" 
containerID="c01dfb17fd1a4b83ee5eb0990cc0f19e9a731d964693ce7da348597be82d9a2f" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.173038 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf"] Dec 11 10:45:00 crc kubenswrapper[4953]: E1211 10:45:00.173990 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f8769a-d46e-4783-bda4-6583080f5eff" containerName="extract-utilities" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.174011 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f8769a-d46e-4783-bda4-6583080f5eff" containerName="extract-utilities" Dec 11 10:45:00 crc kubenswrapper[4953]: E1211 10:45:00.174055 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f8769a-d46e-4783-bda4-6583080f5eff" containerName="registry-server" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.174061 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f8769a-d46e-4783-bda4-6583080f5eff" containerName="registry-server" Dec 11 10:45:00 crc kubenswrapper[4953]: E1211 10:45:00.174072 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f8769a-d46e-4783-bda4-6583080f5eff" containerName="extract-content" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.174081 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f8769a-d46e-4783-bda4-6583080f5eff" containerName="extract-content" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.174249 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f8769a-d46e-4783-bda4-6583080f5eff" containerName="registry-server" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.175012 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.177319 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.177654 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.190962 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf"] Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.276453 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scm6h\" (UniqueName: \"kubernetes.io/projected/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-kube-api-access-scm6h\") pod \"collect-profiles-29424165-mdqxf\" (UID: \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.276808 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-config-volume\") pod \"collect-profiles-29424165-mdqxf\" (UID: \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.277266 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-secret-volume\") pod \"collect-profiles-29424165-mdqxf\" (UID: \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.379794 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scm6h\" (UniqueName: \"kubernetes.io/projected/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-kube-api-access-scm6h\") pod \"collect-profiles-29424165-mdqxf\" (UID: \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.379904 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-config-volume\") pod \"collect-profiles-29424165-mdqxf\" (UID: \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.379945 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-secret-volume\") pod \"collect-profiles-29424165-mdqxf\" (UID: \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.381171 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-config-volume\") pod 
\"collect-profiles-29424165-mdqxf\" (UID: \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.392241 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-secret-volume\") pod \"collect-profiles-29424165-mdqxf\" (UID: \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.399680 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scm6h\" (UniqueName: \"kubernetes.io/projected/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-kube-api-access-scm6h\") pod \"collect-profiles-29424165-mdqxf\" (UID: \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:00 crc kubenswrapper[4953]: I1211 10:45:00.515794 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:01 crc kubenswrapper[4953]: I1211 10:45:01.271591 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf"] Dec 11 10:45:01 crc kubenswrapper[4953]: I1211 10:45:01.783080 4953 generic.go:334] "Generic (PLEG): container finished" podID="bfb5011e-d0ec-46b3-ae64-fbdf81f24461" containerID="56ebdf6585838df8ae69a142f953c34c4c160dbb5a86cd58fc4df875b981a04a" exitCode=0 Dec 11 10:45:01 crc kubenswrapper[4953]: I1211 10:45:01.783188 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" event={"ID":"bfb5011e-d0ec-46b3-ae64-fbdf81f24461","Type":"ContainerDied","Data":"56ebdf6585838df8ae69a142f953c34c4c160dbb5a86cd58fc4df875b981a04a"} Dec 11 10:45:01 crc kubenswrapper[4953]: I1211 10:45:01.783230 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" event={"ID":"bfb5011e-d0ec-46b3-ae64-fbdf81f24461","Type":"ContainerStarted","Data":"7c054d9e42e6b254424a5f6b17c4d0405bc86543fa7d24c91e3081495be0b65f"} Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.062742 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.191107 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scm6h\" (UniqueName: \"kubernetes.io/projected/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-kube-api-access-scm6h\") pod \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\" (UID: \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\") " Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.191196 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-secret-volume\") pod \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\" (UID: \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\") " Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.191263 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-config-volume\") pod \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\" (UID: \"bfb5011e-d0ec-46b3-ae64-fbdf81f24461\") " Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.192194 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-config-volume" (OuterVolumeSpecName: "config-volume") pod "bfb5011e-d0ec-46b3-ae64-fbdf81f24461" (UID: "bfb5011e-d0ec-46b3-ae64-fbdf81f24461"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.196975 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bfb5011e-d0ec-46b3-ae64-fbdf81f24461" (UID: "bfb5011e-d0ec-46b3-ae64-fbdf81f24461"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.197607 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-kube-api-access-scm6h" (OuterVolumeSpecName: "kube-api-access-scm6h") pod "bfb5011e-d0ec-46b3-ae64-fbdf81f24461" (UID: "bfb5011e-d0ec-46b3-ae64-fbdf81f24461"). InnerVolumeSpecName "kube-api-access-scm6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.293068 4953 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.293127 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scm6h\" (UniqueName: \"kubernetes.io/projected/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-kube-api-access-scm6h\") on node \"crc\" DevicePath \"\"" Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.293143 4953 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfb5011e-d0ec-46b3-ae64-fbdf81f24461-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.797414 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" event={"ID":"bfb5011e-d0ec-46b3-ae64-fbdf81f24461","Type":"ContainerDied","Data":"7c054d9e42e6b254424a5f6b17c4d0405bc86543fa7d24c91e3081495be0b65f"} Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.797465 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c054d9e42e6b254424a5f6b17c4d0405bc86543fa7d24c91e3081495be0b65f" Dec 11 10:45:03 crc kubenswrapper[4953]: I1211 10:45:03.797471 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf" Dec 11 10:45:04 crc kubenswrapper[4953]: I1211 10:45:04.206955 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl"] Dec 11 10:45:04 crc kubenswrapper[4953]: I1211 10:45:04.211802 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424120-hdqwl"] Dec 11 10:45:04 crc kubenswrapper[4953]: I1211 10:45:04.483846 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88498e28-0a15-43a5-b157-5a3baccfaaaf" path="/var/lib/kubelet/pods/88498e28-0a15-43a5-b157-5a3baccfaaaf/volumes" Dec 11 10:45:33 crc kubenswrapper[4953]: I1211 10:45:33.679821 4953 scope.go:117] "RemoveContainer" containerID="1d08248671906f09dfebb27a3caa1268bf31d38878e09af2bf48efe79e0f1eef" Dec 11 10:45:48 crc kubenswrapper[4953]: I1211 10:45:48.194309 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:45:48 crc kubenswrapper[4953]: I1211 10:45:48.195020 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.676820 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kq94l"] Dec 11 10:46:12 crc kubenswrapper[4953]: E1211 10:46:12.677775 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb5011e-d0ec-46b3-ae64-fbdf81f24461" 
containerName="collect-profiles" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.677805 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb5011e-d0ec-46b3-ae64-fbdf81f24461" containerName="collect-profiles" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.677986 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb5011e-d0ec-46b3-ae64-fbdf81f24461" containerName="collect-profiles" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.679090 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kq94l" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.693355 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kq94l"] Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.802104 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5lxs\" (UniqueName: \"kubernetes.io/projected/e49acecc-30fb-4529-862d-85d4a8f50935-kube-api-access-f5lxs\") pod \"community-operators-kq94l\" (UID: \"e49acecc-30fb-4529-862d-85d4a8f50935\") " pod="openshift-marketplace/community-operators-kq94l" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.802431 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e49acecc-30fb-4529-862d-85d4a8f50935-catalog-content\") pod \"community-operators-kq94l\" (UID: \"e49acecc-30fb-4529-862d-85d4a8f50935\") " pod="openshift-marketplace/community-operators-kq94l" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.802689 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e49acecc-30fb-4529-862d-85d4a8f50935-utilities\") pod \"community-operators-kq94l\" (UID: \"e49acecc-30fb-4529-862d-85d4a8f50935\") " pod="openshift-marketplace/community-operators-kq94l" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.904148 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5lxs\" (UniqueName: \"kubernetes.io/projected/e49acecc-30fb-4529-862d-85d4a8f50935-kube-api-access-f5lxs\") pod \"community-operators-kq94l\" (UID: \"e49acecc-30fb-4529-862d-85d4a8f50935\") " pod="openshift-marketplace/community-operators-kq94l" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.904236 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e49acecc-30fb-4529-862d-85d4a8f50935-catalog-content\") pod \"community-operators-kq94l\" (UID: \"e49acecc-30fb-4529-862d-85d4a8f50935\") " pod="openshift-marketplace/community-operators-kq94l" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.904285 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e49acecc-30fb-4529-862d-85d4a8f50935-utilities\") pod \"community-operators-kq94l\" (UID: \"e49acecc-30fb-4529-862d-85d4a8f50935\") " pod="openshift-marketplace/community-operators-kq94l" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.904783 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e49acecc-30fb-4529-862d-85d4a8f50935-utilities\") pod \"community-operators-kq94l\" (UID: 
\"e49acecc-30fb-4529-862d-85d4a8f50935\") " pod="openshift-marketplace/community-operators-kq94l" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.904971 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e49acecc-30fb-4529-862d-85d4a8f50935-catalog-content\") pod \"community-operators-kq94l\" (UID: \"e49acecc-30fb-4529-862d-85d4a8f50935\") " pod="openshift-marketplace/community-operators-kq94l" Dec 11 10:46:12 crc kubenswrapper[4953]: I1211 10:46:12.927761 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5lxs\" (UniqueName: \"kubernetes.io/projected/e49acecc-30fb-4529-862d-85d4a8f50935-kube-api-access-f5lxs\") pod \"community-operators-kq94l\" (UID: \"e49acecc-30fb-4529-862d-85d4a8f50935\") " pod="openshift-marketplace/community-operators-kq94l" Dec 11 10:46:13 crc kubenswrapper[4953]: I1211 10:46:13.004009 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kq94l" Dec 11 10:46:13 crc kubenswrapper[4953]: I1211 10:46:13.608055 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kq94l"] Dec 11 10:46:13 crc kubenswrapper[4953]: W1211 10:46:13.614807 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode49acecc_30fb_4529_862d_85d4a8f50935.slice/crio-a8b3d855db0da177c6dd6c7b925fbe430f7c858f3215fdeec7f74c14ce506d79 WatchSource:0}: Error finding container a8b3d855db0da177c6dd6c7b925fbe430f7c858f3215fdeec7f74c14ce506d79: Status 404 returned error can't find the container with id a8b3d855db0da177c6dd6c7b925fbe430f7c858f3215fdeec7f74c14ce506d79 Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.446507 4953 generic.go:334] "Generic (PLEG): container finished" podID="e49acecc-30fb-4529-862d-85d4a8f50935" containerID="2b1253edb114c37ca7fc9afe4993134c8035d3322679e4f3999b3b0b9490c487" exitCode=0 Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.446843 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kq94l" event={"ID":"e49acecc-30fb-4529-862d-85d4a8f50935","Type":"ContainerDied","Data":"2b1253edb114c37ca7fc9afe4993134c8035d3322679e4f3999b3b0b9490c487"} Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.446898 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kq94l" event={"ID":"e49acecc-30fb-4529-862d-85d4a8f50935","Type":"ContainerStarted","Data":"a8b3d855db0da177c6dd6c7b925fbe430f7c858f3215fdeec7f74c14ce506d79"} Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.450073 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.498044 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vkqkk"] Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.507701 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkqkk"] Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.507821 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkqkk" Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.586621 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tq5k\" (UniqueName: \"kubernetes.io/projected/05428f33-bd25-4df1-8237-2dd558f9b054-kube-api-access-2tq5k\") pod \"certified-operators-vkqkk\" (UID: \"05428f33-bd25-4df1-8237-2dd558f9b054\") " pod="openshift-marketplace/certified-operators-vkqkk" Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.586689 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05428f33-bd25-4df1-8237-2dd558f9b054-utilities\") pod \"certified-operators-vkqkk\" (UID: \"05428f33-bd25-4df1-8237-2dd558f9b054\") " pod="openshift-marketplace/certified-operators-vkqkk" Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.586948 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05428f33-bd25-4df1-8237-2dd558f9b054-catalog-content\") pod \"certified-operators-vkqkk\" (UID: \"05428f33-bd25-4df1-8237-2dd558f9b054\") " pod="openshift-marketplace/certified-operators-vkqkk" Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.687986 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05428f33-bd25-4df1-8237-2dd558f9b054-catalog-content\") pod \"certified-operators-vkqkk\" (UID: \"05428f33-bd25-4df1-8237-2dd558f9b054\") " pod="openshift-marketplace/certified-operators-vkqkk" Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.688062 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tq5k\" (UniqueName: \"kubernetes.io/projected/05428f33-bd25-4df1-8237-2dd558f9b054-kube-api-access-2tq5k\") pod \"certified-operators-vkqkk\" (UID: \"05428f33-bd25-4df1-8237-2dd558f9b054\") " pod="openshift-marketplace/certified-operators-vkqkk" Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.688085 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05428f33-bd25-4df1-8237-2dd558f9b054-utilities\") pod \"certified-operators-vkqkk\" (UID: \"05428f33-bd25-4df1-8237-2dd558f9b054\") " pod="openshift-marketplace/certified-operators-vkqkk" Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.688664 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05428f33-bd25-4df1-8237-2dd558f9b054-utilities\") pod \"certified-operators-vkqkk\" (UID: \"05428f33-bd25-4df1-8237-2dd558f9b054\") " pod="openshift-marketplace/certified-operators-vkqkk" Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.688916 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05428f33-bd25-4df1-8237-2dd558f9b054-catalog-content\") pod \"certified-operators-vkqkk\" (UID: \"05428f33-bd25-4df1-8237-2dd558f9b054\") " pod="openshift-marketplace/certified-operators-vkqkk" Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.715513 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tq5k\" (UniqueName: \"kubernetes.io/projected/05428f33-bd25-4df1-8237-2dd558f9b054-kube-api-access-2tq5k\") pod 
\"certified-operators-vkqkk\" (UID: \"05428f33-bd25-4df1-8237-2dd558f9b054\") " pod="openshift-marketplace/certified-operators-vkqkk" Dec 11 10:46:14 crc kubenswrapper[4953]: I1211 10:46:14.836660 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkqkk" Dec 11 10:46:15 crc kubenswrapper[4953]: I1211 10:46:15.195045 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkqkk"] Dec 11 10:46:15 crc kubenswrapper[4953]: I1211 10:46:15.454855 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kq94l" event={"ID":"e49acecc-30fb-4529-862d-85d4a8f50935","Type":"ContainerStarted","Data":"ba36283611d983c1036e4701d06927ff8d38e7742c2d8a9aaf54bccc3d5fcf9a"} Dec 11 10:46:15 crc kubenswrapper[4953]: I1211 10:46:15.456486 4953 generic.go:334] "Generic (PLEG): container finished" podID="05428f33-bd25-4df1-8237-2dd558f9b054" containerID="0ff79af66e2db30a8acf7b62fd63630ed8b5791de356968c9138a8f280e16bfc" exitCode=0 Dec 11 10:46:15 crc kubenswrapper[4953]: I1211 10:46:15.456527 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkqkk" event={"ID":"05428f33-bd25-4df1-8237-2dd558f9b054","Type":"ContainerDied","Data":"0ff79af66e2db30a8acf7b62fd63630ed8b5791de356968c9138a8f280e16bfc"} Dec 11 10:46:15 crc kubenswrapper[4953]: I1211 10:46:15.456551 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkqkk" event={"ID":"05428f33-bd25-4df1-8237-2dd558f9b054","Type":"ContainerStarted","Data":"e90173d9a7e9aeb32c35f550d2ce9d1abf2d7cd704ca4572d873a6268a8da895"} Dec 11 10:46:15 crc kubenswrapper[4953]: I1211 10:46:15.861008 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-44sdb"] Dec 11 10:46:15 crc kubenswrapper[4953]: I1211 10:46:15.862723 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-44sdb" Dec 11 10:46:15 crc kubenswrapper[4953]: I1211 10:46:15.874005 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-44sdb"] Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.007518 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v74gh\" (UniqueName: \"kubernetes.io/projected/0908d506-4937-4766-a409-9b538c205c2d-kube-api-access-v74gh\") pod \"redhat-operators-44sdb\" (UID: \"0908d506-4937-4766-a409-9b538c205c2d\") " pod="openshift-marketplace/redhat-operators-44sdb" Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.007614 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0908d506-4937-4766-a409-9b538c205c2d-catalog-content\") pod \"redhat-operators-44sdb\" (UID: \"0908d506-4937-4766-a409-9b538c205c2d\") " pod="openshift-marketplace/redhat-operators-44sdb" Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.007655 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0908d506-4937-4766-a409-9b538c205c2d-utilities\") pod \"redhat-operators-44sdb\" (UID: \"0908d506-4937-4766-a409-9b538c205c2d\") " pod="openshift-marketplace/redhat-operators-44sdb" Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.108779 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0908d506-4937-4766-a409-9b538c205c2d-catalog-content\") pod \"redhat-operators-44sdb\" (UID: \"0908d506-4937-4766-a409-9b538c205c2d\") " pod="openshift-marketplace/redhat-operators-44sdb" Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.109141 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0908d506-4937-4766-a409-9b538c205c2d-utilities\") pod \"redhat-operators-44sdb\" (UID: \"0908d506-4937-4766-a409-9b538c205c2d\") " pod="openshift-marketplace/redhat-operators-44sdb" Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.109210 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v74gh\" (UniqueName: \"kubernetes.io/projected/0908d506-4937-4766-a409-9b538c205c2d-kube-api-access-v74gh\") pod \"redhat-operators-44sdb\" (UID: \"0908d506-4937-4766-a409-9b538c205c2d\") " pod="openshift-marketplace/redhat-operators-44sdb" Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.109294 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0908d506-4937-4766-a409-9b538c205c2d-catalog-content\") pod \"redhat-operators-44sdb\" (UID: \"0908d506-4937-4766-a409-9b538c205c2d\") " pod="openshift-marketplace/redhat-operators-44sdb" Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.109540 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0908d506-4937-4766-a409-9b538c205c2d-utilities\") pod \"redhat-operators-44sdb\" (UID: \"0908d506-4937-4766-a409-9b538c205c2d\") " pod="openshift-marketplace/redhat-operators-44sdb" Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.135673 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v74gh\" (UniqueName: \"kubernetes.io/projected/0908d506-4937-4766-a409-9b538c205c2d-kube-api-access-v74gh\") pod \"redhat-operators-44sdb\" (UID: \"0908d506-4937-4766-a409-9b538c205c2d\") " pod="openshift-marketplace/redhat-operators-44sdb" Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.178941 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-44sdb" Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.469051 4953 generic.go:334] "Generic (PLEG): container finished" podID="e49acecc-30fb-4529-862d-85d4a8f50935" containerID="ba36283611d983c1036e4701d06927ff8d38e7742c2d8a9aaf54bccc3d5fcf9a" exitCode=0 Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.469166 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kq94l" event={"ID":"e49acecc-30fb-4529-862d-85d4a8f50935","Type":"ContainerDied","Data":"ba36283611d983c1036e4701d06927ff8d38e7742c2d8a9aaf54bccc3d5fcf9a"} Dec 11 10:46:16 crc kubenswrapper[4953]: W1211 10:46:16.633348 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0908d506_4937_4766_a409_9b538c205c2d.slice/crio-1b013f64fd66b97f19e805750014b3e4e5a02c9586967d024d4dd735f6236b55 WatchSource:0}: Error finding container 1b013f64fd66b97f19e805750014b3e4e5a02c9586967d024d4dd735f6236b55: Status 404 returned error can't find the container with id 1b013f64fd66b97f19e805750014b3e4e5a02c9586967d024d4dd735f6236b55 Dec 11 10:46:16 crc kubenswrapper[4953]: I1211 10:46:16.636039 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-44sdb"] Dec 11 10:46:17 crc kubenswrapper[4953]: I1211 10:46:17.487043 4953 generic.go:334] "Generic (PLEG): container finished" podID="0908d506-4937-4766-a409-9b538c205c2d" containerID="36051e18508665c3e3af8e3b917086dde6119293a36ec35b577a28eac7cfc016" exitCode=0 Dec 11 10:46:17 crc kubenswrapper[4953]: I1211 10:46:17.487463 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44sdb" event={"ID":"0908d506-4937-4766-a409-9b538c205c2d","Type":"ContainerDied","Data":"36051e18508665c3e3af8e3b917086dde6119293a36ec35b577a28eac7cfc016"} Dec 11 10:46:17 crc kubenswrapper[4953]: I1211 10:46:17.487497 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44sdb" event={"ID":"0908d506-4937-4766-a409-9b538c205c2d","Type":"ContainerStarted","Data":"1b013f64fd66b97f19e805750014b3e4e5a02c9586967d024d4dd735f6236b55"} Dec 11 10:46:17 crc kubenswrapper[4953]: I1211 10:46:17.510979 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkqkk" event={"ID":"05428f33-bd25-4df1-8237-2dd558f9b054","Type":"ContainerStarted","Data":"2eac2e5160e97c71e70cb441ec2e0b26f3627a48321a1a3469ce2510f6dabfac"} Dec 11 10:46:17 crc kubenswrapper[4953]: I1211 10:46:17.516702 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kq94l" event={"ID":"e49acecc-30fb-4529-862d-85d4a8f50935","Type":"ContainerStarted","Data":"6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a"} Dec 11 10:46:17 crc kubenswrapper[4953]: I1211 10:46:17.556280 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kq94l" podStartSLOduration=2.94987838 podStartE2EDuration="5.556255109s" 
podCreationTimestamp="2025-12-11 10:46:12 +0000 UTC" firstStartedPulling="2025-12-11 10:46:14.449715505 +0000 UTC m=+2092.473574548" lastFinishedPulling="2025-12-11 10:46:17.056092244 +0000 UTC m=+2095.079951277" observedRunningTime="2025-12-11 10:46:17.546499985 +0000 UTC m=+2095.570359018" watchObservedRunningTime="2025-12-11 10:46:17.556255109 +0000 UTC m=+2095.580114142" Dec 11 10:46:18 crc kubenswrapper[4953]: I1211 10:46:18.193687 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:46:18 crc kubenswrapper[4953]: I1211 10:46:18.193829 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:46:18 crc kubenswrapper[4953]: I1211 10:46:18.525727 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44sdb" event={"ID":"0908d506-4937-4766-a409-9b538c205c2d","Type":"ContainerStarted","Data":"a0caa15e9e1666f110492b04ca1394a9fb13f2f53997d6646f69902f87cbd5c3"} Dec 11 10:46:18 crc kubenswrapper[4953]: I1211 10:46:18.527809 4953 generic.go:334] "Generic (PLEG): container finished" podID="05428f33-bd25-4df1-8237-2dd558f9b054" containerID="2eac2e5160e97c71e70cb441ec2e0b26f3627a48321a1a3469ce2510f6dabfac" exitCode=0 Dec 11 10:46:18 crc kubenswrapper[4953]: I1211 10:46:18.527860 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkqkk" event={"ID":"05428f33-bd25-4df1-8237-2dd558f9b054","Type":"ContainerDied","Data":"2eac2e5160e97c71e70cb441ec2e0b26f3627a48321a1a3469ce2510f6dabfac"} Dec 11 10:46:19 crc kubenswrapper[4953]: I1211 10:46:19.535933 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkqkk" event={"ID":"05428f33-bd25-4df1-8237-2dd558f9b054","Type":"ContainerStarted","Data":"3e8759c3ed1502d8afe5b072a7c7d02275cbf5ab1232fda577e2806f4963650f"} Dec 11 10:46:19 crc kubenswrapper[4953]: I1211 10:46:19.559774 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vkqkk" podStartSLOduration=2.0105694 podStartE2EDuration="5.559754644s" podCreationTimestamp="2025-12-11 10:46:14 +0000 UTC" firstStartedPulling="2025-12-11 10:46:15.457778169 +0000 UTC m=+2093.481637202" lastFinishedPulling="2025-12-11 10:46:19.006963413 +0000 UTC m=+2097.030822446" observedRunningTime="2025-12-11 10:46:19.553969143 +0000 UTC m=+2097.577828186" watchObservedRunningTime="2025-12-11 10:46:19.559754644 +0000 UTC m=+2097.583613687" Dec 11 10:46:20 crc kubenswrapper[4953]: I1211 10:46:20.545966 4953 generic.go:334] "Generic (PLEG): container finished" podID="0908d506-4937-4766-a409-9b538c205c2d" containerID="a0caa15e9e1666f110492b04ca1394a9fb13f2f53997d6646f69902f87cbd5c3" exitCode=0 Dec 11 10:46:20 crc kubenswrapper[4953]: I1211 10:46:20.546116 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44sdb" event={"ID":"0908d506-4937-4766-a409-9b538c205c2d","Type":"ContainerDied","Data":"a0caa15e9e1666f110492b04ca1394a9fb13f2f53997d6646f69902f87cbd5c3"} 
Dec 11 10:46:22 crc kubenswrapper[4953]: I1211 10:46:22.577268 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44sdb" event={"ID":"0908d506-4937-4766-a409-9b538c205c2d","Type":"ContainerStarted","Data":"e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df"}
Dec 11 10:46:22 crc kubenswrapper[4953]: I1211 10:46:22.597009 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-44sdb" podStartSLOduration=3.48863281 podStartE2EDuration="7.596991214s" podCreationTimestamp="2025-12-11 10:46:15 +0000 UTC" firstStartedPulling="2025-12-11 10:46:17.492740505 +0000 UTC m=+2095.516599538" lastFinishedPulling="2025-12-11 10:46:21.601098909 +0000 UTC m=+2099.624957942" observedRunningTime="2025-12-11 10:46:22.595116635 +0000 UTC m=+2100.618975688" watchObservedRunningTime="2025-12-11 10:46:22.596991214 +0000 UTC m=+2100.620850257"
Dec 11 10:46:23 crc kubenswrapper[4953]: I1211 10:46:23.006722 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kq94l"
Dec 11 10:46:23 crc kubenswrapper[4953]: I1211 10:46:23.006794 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kq94l"
Dec 11 10:46:23 crc kubenswrapper[4953]: I1211 10:46:23.058123 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kq94l"
Dec 11 10:46:23 crc kubenswrapper[4953]: I1211 10:46:23.630467 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kq94l"
Dec 11 10:46:24 crc kubenswrapper[4953]: I1211 10:46:24.837260 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vkqkk"
Dec 11 10:46:24 crc kubenswrapper[4953]: I1211 10:46:24.837322 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vkqkk"
Dec 11 10:46:24 crc kubenswrapper[4953]: I1211 10:46:24.885523 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vkqkk"
Dec 11 10:46:25 crc kubenswrapper[4953]: I1211 10:46:25.455452 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kq94l"]
Dec 11 10:46:25 crc kubenswrapper[4953]: I1211 10:46:25.597436 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kq94l" podUID="e49acecc-30fb-4529-862d-85d4a8f50935" containerName="registry-server" containerID="cri-o://6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a" gracePeriod=2
Dec 11 10:46:25 crc kubenswrapper[4953]: I1211 10:46:25.657477 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vkqkk"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.037949 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kq94l"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.125604 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5lxs\" (UniqueName: \"kubernetes.io/projected/e49acecc-30fb-4529-862d-85d4a8f50935-kube-api-access-f5lxs\") pod \"e49acecc-30fb-4529-862d-85d4a8f50935\" (UID: \"e49acecc-30fb-4529-862d-85d4a8f50935\") "
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.125705 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e49acecc-30fb-4529-862d-85d4a8f50935-catalog-content\") pod \"e49acecc-30fb-4529-862d-85d4a8f50935\" (UID: \"e49acecc-30fb-4529-862d-85d4a8f50935\") "
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.125816 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e49acecc-30fb-4529-862d-85d4a8f50935-utilities\") pod \"e49acecc-30fb-4529-862d-85d4a8f50935\" (UID: \"e49acecc-30fb-4529-862d-85d4a8f50935\") "
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.126839 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e49acecc-30fb-4529-862d-85d4a8f50935-utilities" (OuterVolumeSpecName: "utilities") pod "e49acecc-30fb-4529-862d-85d4a8f50935" (UID: "e49acecc-30fb-4529-862d-85d4a8f50935"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.131463 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49acecc-30fb-4529-862d-85d4a8f50935-kube-api-access-f5lxs" (OuterVolumeSpecName: "kube-api-access-f5lxs") pod "e49acecc-30fb-4529-862d-85d4a8f50935" (UID: "e49acecc-30fb-4529-862d-85d4a8f50935"). InnerVolumeSpecName "kube-api-access-f5lxs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.179778 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-44sdb"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.180368 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-44sdb"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.187504 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e49acecc-30fb-4529-862d-85d4a8f50935-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e49acecc-30fb-4529-862d-85d4a8f50935" (UID: "e49acecc-30fb-4529-862d-85d4a8f50935"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.227682 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e49acecc-30fb-4529-862d-85d4a8f50935-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.227732 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e49acecc-30fb-4529-862d-85d4a8f50935-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.227747 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5lxs\" (UniqueName: \"kubernetes.io/projected/e49acecc-30fb-4529-862d-85d4a8f50935-kube-api-access-f5lxs\") on node \"crc\" DevicePath \"\""
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.608178 4953 generic.go:334] "Generic (PLEG): container finished" podID="e49acecc-30fb-4529-862d-85d4a8f50935" containerID="6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a" exitCode=0
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.608970 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kq94l"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.609350 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kq94l" event={"ID":"e49acecc-30fb-4529-862d-85d4a8f50935","Type":"ContainerDied","Data":"6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a"}
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.609378 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kq94l" event={"ID":"e49acecc-30fb-4529-862d-85d4a8f50935","Type":"ContainerDied","Data":"a8b3d855db0da177c6dd6c7b925fbe430f7c858f3215fdeec7f74c14ce506d79"}
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.609398 4953 scope.go:117] "RemoveContainer" containerID="6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.638712 4953 scope.go:117] "RemoveContainer" containerID="ba36283611d983c1036e4701d06927ff8d38e7742c2d8a9aaf54bccc3d5fcf9a"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.649259 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kq94l"]
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.656052 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kq94l"]
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.666348 4953 scope.go:117] "RemoveContainer" containerID="2b1253edb114c37ca7fc9afe4993134c8035d3322679e4f3999b3b0b9490c487"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.687191 4953 scope.go:117] "RemoveContainer" containerID="6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a"
Dec 11 10:46:26 crc kubenswrapper[4953]: E1211 10:46:26.688075 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a\": container with ID starting with 6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a not found: ID does not exist" containerID="6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.688128 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a"} err="failed to get container status \"6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a\": rpc error: code = NotFound desc = could not find container \"6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a\": container with ID starting with 6fa1e908341a62307d34a738d9f035117d3f5c87e3584b3e446c9ad0c0b4d17a not found: ID does not exist"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.688162 4953 scope.go:117] "RemoveContainer" containerID="ba36283611d983c1036e4701d06927ff8d38e7742c2d8a9aaf54bccc3d5fcf9a"
Dec 11 10:46:26 crc kubenswrapper[4953]: E1211 10:46:26.688504 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba36283611d983c1036e4701d06927ff8d38e7742c2d8a9aaf54bccc3d5fcf9a\": container with ID starting with ba36283611d983c1036e4701d06927ff8d38e7742c2d8a9aaf54bccc3d5fcf9a not found: ID does not exist" containerID="ba36283611d983c1036e4701d06927ff8d38e7742c2d8a9aaf54bccc3d5fcf9a"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.688547 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba36283611d983c1036e4701d06927ff8d38e7742c2d8a9aaf54bccc3d5fcf9a"} err="failed to get container status \"ba36283611d983c1036e4701d06927ff8d38e7742c2d8a9aaf54bccc3d5fcf9a\": rpc error: code = NotFound desc = could not find container \"ba36283611d983c1036e4701d06927ff8d38e7742c2d8a9aaf54bccc3d5fcf9a\": container with ID starting with ba36283611d983c1036e4701d06927ff8d38e7742c2d8a9aaf54bccc3d5fcf9a not found: ID does not exist"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.688592 4953 scope.go:117] "RemoveContainer" containerID="2b1253edb114c37ca7fc9afe4993134c8035d3322679e4f3999b3b0b9490c487"
Dec 11 10:46:26 crc kubenswrapper[4953]: E1211 10:46:26.689176 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b1253edb114c37ca7fc9afe4993134c8035d3322679e4f3999b3b0b9490c487\": container with ID starting with 2b1253edb114c37ca7fc9afe4993134c8035d3322679e4f3999b3b0b9490c487 not found: ID does not exist" containerID="2b1253edb114c37ca7fc9afe4993134c8035d3322679e4f3999b3b0b9490c487"
Dec 11 10:46:26 crc kubenswrapper[4953]: I1211 10:46:26.689208 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b1253edb114c37ca7fc9afe4993134c8035d3322679e4f3999b3b0b9490c487"} err="failed to get container status \"2b1253edb114c37ca7fc9afe4993134c8035d3322679e4f3999b3b0b9490c487\": rpc error: code = NotFound desc = could not find container \"2b1253edb114c37ca7fc9afe4993134c8035d3322679e4f3999b3b0b9490c487\": container with ID starting with 2b1253edb114c37ca7fc9afe4993134c8035d3322679e4f3999b3b0b9490c487 not found: ID does not exist"
Dec 11 10:46:27 crc kubenswrapper[4953]: I1211 10:46:27.218543 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-44sdb" podUID="0908d506-4937-4766-a409-9b538c205c2d" containerName="registry-server" probeResult="failure" output=<
Dec 11 10:46:27 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s
Dec 11 10:46:27 crc kubenswrapper[4953]: >
Dec 11 10:46:27 crc kubenswrapper[4953]: I1211 10:46:27.659217 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkqkk"]
Dec 11 10:46:27 crc kubenswrapper[4953]: I1211 10:46:27.659563 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vkqkk" podUID="05428f33-bd25-4df1-8237-2dd558f9b054" containerName="registry-server" containerID="cri-o://3e8759c3ed1502d8afe5b072a7c7d02275cbf5ab1232fda577e2806f4963650f" gracePeriod=2
Dec 11 10:46:28 crc kubenswrapper[4953]: I1211 10:46:28.483105 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e49acecc-30fb-4529-862d-85d4a8f50935" path="/var/lib/kubelet/pods/e49acecc-30fb-4529-862d-85d4a8f50935/volumes"
Dec 11 10:46:28 crc kubenswrapper[4953]: I1211 10:46:28.628019 4953 generic.go:334] "Generic (PLEG): container finished" podID="05428f33-bd25-4df1-8237-2dd558f9b054" containerID="3e8759c3ed1502d8afe5b072a7c7d02275cbf5ab1232fda577e2806f4963650f" exitCode=0
Dec 11 10:46:28 crc kubenswrapper[4953]: I1211 10:46:28.628066 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkqkk" event={"ID":"05428f33-bd25-4df1-8237-2dd558f9b054","Type":"ContainerDied","Data":"3e8759c3ed1502d8afe5b072a7c7d02275cbf5ab1232fda577e2806f4963650f"}
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.270023 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkqkk"
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.373255 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05428f33-bd25-4df1-8237-2dd558f9b054-utilities\") pod \"05428f33-bd25-4df1-8237-2dd558f9b054\" (UID: \"05428f33-bd25-4df1-8237-2dd558f9b054\") "
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.373394 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tq5k\" (UniqueName: \"kubernetes.io/projected/05428f33-bd25-4df1-8237-2dd558f9b054-kube-api-access-2tq5k\") pod \"05428f33-bd25-4df1-8237-2dd558f9b054\" (UID: \"05428f33-bd25-4df1-8237-2dd558f9b054\") "
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.373436 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05428f33-bd25-4df1-8237-2dd558f9b054-catalog-content\") pod \"05428f33-bd25-4df1-8237-2dd558f9b054\" (UID: \"05428f33-bd25-4df1-8237-2dd558f9b054\") "
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.374384 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05428f33-bd25-4df1-8237-2dd558f9b054-utilities" (OuterVolumeSpecName: "utilities") pod "05428f33-bd25-4df1-8237-2dd558f9b054" (UID: "05428f33-bd25-4df1-8237-2dd558f9b054"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.378663 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05428f33-bd25-4df1-8237-2dd558f9b054-kube-api-access-2tq5k" (OuterVolumeSpecName: "kube-api-access-2tq5k") pod "05428f33-bd25-4df1-8237-2dd558f9b054" (UID: "05428f33-bd25-4df1-8237-2dd558f9b054"). InnerVolumeSpecName "kube-api-access-2tq5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.435708 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05428f33-bd25-4df1-8237-2dd558f9b054-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05428f33-bd25-4df1-8237-2dd558f9b054" (UID: "05428f33-bd25-4df1-8237-2dd558f9b054"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.475049 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05428f33-bd25-4df1-8237-2dd558f9b054-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.475330 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05428f33-bd25-4df1-8237-2dd558f9b054-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.475344 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tq5k\" (UniqueName: \"kubernetes.io/projected/05428f33-bd25-4df1-8237-2dd558f9b054-kube-api-access-2tq5k\") on node \"crc\" DevicePath \"\""
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.640383 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkqkk" event={"ID":"05428f33-bd25-4df1-8237-2dd558f9b054","Type":"ContainerDied","Data":"e90173d9a7e9aeb32c35f550d2ce9d1abf2d7cd704ca4572d873a6268a8da895"}
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.640447 4953 scope.go:117] "RemoveContainer" containerID="3e8759c3ed1502d8afe5b072a7c7d02275cbf5ab1232fda577e2806f4963650f"
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.640530 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkqkk"
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.660974 4953 scope.go:117] "RemoveContainer" containerID="2eac2e5160e97c71e70cb441ec2e0b26f3627a48321a1a3469ce2510f6dabfac"
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.685450 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkqkk"]
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.692826 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vkqkk"]
Dec 11 10:46:29 crc kubenswrapper[4953]: I1211 10:46:29.699982 4953 scope.go:117] "RemoveContainer" containerID="0ff79af66e2db30a8acf7b62fd63630ed8b5791de356968c9138a8f280e16bfc"
Dec 11 10:46:30 crc kubenswrapper[4953]: I1211 10:46:30.483903 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05428f33-bd25-4df1-8237-2dd558f9b054" path="/var/lib/kubelet/pods/05428f33-bd25-4df1-8237-2dd558f9b054/volumes"
Dec 11 10:46:36 crc kubenswrapper[4953]: I1211 10:46:36.220745 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-44sdb"
Dec 11 10:46:36 crc kubenswrapper[4953]: I1211 10:46:36.270521 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-44sdb"
Dec 11 10:46:36 crc kubenswrapper[4953]: I1211 10:46:36.455313 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-44sdb"]
Dec 11 10:46:37 crc kubenswrapper[4953]: I1211 10:46:37.698495 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-44sdb" podUID="0908d506-4937-4766-a409-9b538c205c2d" containerName="registry-server" containerID="cri-o://e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df" gracePeriod=2
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.187164 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-44sdb"
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.317248 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0908d506-4937-4766-a409-9b538c205c2d-utilities\") pod \"0908d506-4937-4766-a409-9b538c205c2d\" (UID: \"0908d506-4937-4766-a409-9b538c205c2d\") "
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.317325 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v74gh\" (UniqueName: \"kubernetes.io/projected/0908d506-4937-4766-a409-9b538c205c2d-kube-api-access-v74gh\") pod \"0908d506-4937-4766-a409-9b538c205c2d\" (UID: \"0908d506-4937-4766-a409-9b538c205c2d\") "
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.317453 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0908d506-4937-4766-a409-9b538c205c2d-catalog-content\") pod \"0908d506-4937-4766-a409-9b538c205c2d\" (UID: \"0908d506-4937-4766-a409-9b538c205c2d\") "
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.319055 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0908d506-4937-4766-a409-9b538c205c2d-utilities" (OuterVolumeSpecName: "utilities") pod "0908d506-4937-4766-a409-9b538c205c2d" (UID: "0908d506-4937-4766-a409-9b538c205c2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.329948 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0908d506-4937-4766-a409-9b538c205c2d-kube-api-access-v74gh" (OuterVolumeSpecName: "kube-api-access-v74gh") pod "0908d506-4937-4766-a409-9b538c205c2d" (UID: "0908d506-4937-4766-a409-9b538c205c2d"). InnerVolumeSpecName "kube-api-access-v74gh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.418993 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0908d506-4937-4766-a409-9b538c205c2d-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.419030 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v74gh\" (UniqueName: \"kubernetes.io/projected/0908d506-4937-4766-a409-9b538c205c2d-kube-api-access-v74gh\") on node \"crc\" DevicePath \"\""
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.451958 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0908d506-4937-4766-a409-9b538c205c2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0908d506-4937-4766-a409-9b538c205c2d" (UID: "0908d506-4937-4766-a409-9b538c205c2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.521416 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0908d506-4937-4766-a409-9b538c205c2d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.713527 4953 generic.go:334] "Generic (PLEG): container finished" podID="0908d506-4937-4766-a409-9b538c205c2d" containerID="e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df" exitCode=0
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.713605 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44sdb" event={"ID":"0908d506-4937-4766-a409-9b538c205c2d","Type":"ContainerDied","Data":"e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df"}
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.713669 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44sdb" event={"ID":"0908d506-4937-4766-a409-9b538c205c2d","Type":"ContainerDied","Data":"1b013f64fd66b97f19e805750014b3e4e5a02c9586967d024d4dd735f6236b55"}
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.713696 4953 scope.go:117] "RemoveContainer" containerID="e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df"
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.714635 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-44sdb"
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.734658 4953 scope.go:117] "RemoveContainer" containerID="a0caa15e9e1666f110492b04ca1394a9fb13f2f53997d6646f69902f87cbd5c3"
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.749691 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-44sdb"]
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.770062 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-44sdb"]
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.789949 4953 scope.go:117] "RemoveContainer" containerID="36051e18508665c3e3af8e3b917086dde6119293a36ec35b577a28eac7cfc016"
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.805987 4953 scope.go:117] "RemoveContainer" containerID="e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df"
Dec 11 10:46:39 crc kubenswrapper[4953]: E1211 10:46:39.807673 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df\": container with ID starting with e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df not found: ID does not exist" containerID="e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df"
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.807726 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df"} err="failed to get container status \"e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df\": rpc error: code = NotFound desc = could not find container \"e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df\": container with ID starting with e9f7deae5bcc95045b34d169d885300a209ac5adb704f054b795ef03faeb75df not found: ID does not exist"
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.807764 4953 scope.go:117] "RemoveContainer" containerID="a0caa15e9e1666f110492b04ca1394a9fb13f2f53997d6646f69902f87cbd5c3"
Dec 11 10:46:39 crc kubenswrapper[4953]: E1211 10:46:39.808247 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0caa15e9e1666f110492b04ca1394a9fb13f2f53997d6646f69902f87cbd5c3\": container with ID starting with a0caa15e9e1666f110492b04ca1394a9fb13f2f53997d6646f69902f87cbd5c3 not found: ID does not exist" containerID="a0caa15e9e1666f110492b04ca1394a9fb13f2f53997d6646f69902f87cbd5c3"
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.808310 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0caa15e9e1666f110492b04ca1394a9fb13f2f53997d6646f69902f87cbd5c3"} err="failed to get container status \"a0caa15e9e1666f110492b04ca1394a9fb13f2f53997d6646f69902f87cbd5c3\": rpc error: code = NotFound desc = could not find container \"a0caa15e9e1666f110492b04ca1394a9fb13f2f53997d6646f69902f87cbd5c3\": container with ID starting with a0caa15e9e1666f110492b04ca1394a9fb13f2f53997d6646f69902f87cbd5c3 not found: ID does not exist"
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.808361 4953 scope.go:117] "RemoveContainer" containerID="36051e18508665c3e3af8e3b917086dde6119293a36ec35b577a28eac7cfc016"
Dec 11 10:46:39 crc kubenswrapper[4953]: E1211 10:46:39.808704 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36051e18508665c3e3af8e3b917086dde6119293a36ec35b577a28eac7cfc016\": container with ID starting with 36051e18508665c3e3af8e3b917086dde6119293a36ec35b577a28eac7cfc016 not found: ID does not exist" containerID="36051e18508665c3e3af8e3b917086dde6119293a36ec35b577a28eac7cfc016"
Dec 11 10:46:39 crc kubenswrapper[4953]: I1211 10:46:39.808756 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36051e18508665c3e3af8e3b917086dde6119293a36ec35b577a28eac7cfc016"} err="failed to get container status \"36051e18508665c3e3af8e3b917086dde6119293a36ec35b577a28eac7cfc016\": rpc error: code = NotFound desc = could not find container \"36051e18508665c3e3af8e3b917086dde6119293a36ec35b577a28eac7cfc016\": container with ID starting with 36051e18508665c3e3af8e3b917086dde6119293a36ec35b577a28eac7cfc016 not found: ID does not exist"
Dec 11 10:46:40 crc kubenswrapper[4953]: I1211 10:46:40.494783 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0908d506-4937-4766-a409-9b538c205c2d" path="/var/lib/kubelet/pods/0908d506-4937-4766-a409-9b538c205c2d/volumes"
Dec 11 10:46:48 crc kubenswrapper[4953]: I1211 10:46:48.194257 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:46:48 crc kubenswrapper[4953]: I1211 10:46:48.194992 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:46:48 crc kubenswrapper[4953]: I1211 10:46:48.195068 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898"
Dec 11 10:46:48 crc kubenswrapper[4953]: I1211 10:46:48.195634 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94deffc6236f608b04d94b2a32d4ef8e60aa95b5ef694b77f8c376b292561de2"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 10:46:48 crc kubenswrapper[4953]: I1211 10:46:48.195696 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://94deffc6236f608b04d94b2a32d4ef8e60aa95b5ef694b77f8c376b292561de2" gracePeriod=600
Dec 11 10:46:48 crc kubenswrapper[4953]: I1211 10:46:48.783224 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="94deffc6236f608b04d94b2a32d4ef8e60aa95b5ef694b77f8c376b292561de2" exitCode=0
Dec 11 10:46:48 crc kubenswrapper[4953]: I1211 10:46:48.783338 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"94deffc6236f608b04d94b2a32d4ef8e60aa95b5ef694b77f8c376b292561de2"}
Dec 11 10:46:48 crc kubenswrapper[4953]: I1211 10:46:48.783620 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"}
Dec 11 10:46:48 crc kubenswrapper[4953]: I1211 10:46:48.783647 4953 scope.go:117] "RemoveContainer" containerID="53d5bf4beeeacbda3dba3d57562ea4385d09cf6341585a459bb0c495199b914c"
Dec 11 10:48:48 crc kubenswrapper[4953]: I1211 10:48:48.260850 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:48:48 crc kubenswrapper[4953]: I1211 10:48:48.261377 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:49:18 crc kubenswrapper[4953]: I1211 10:49:18.193440 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:49:18 crc kubenswrapper[4953]: I1211 10:49:18.194187 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:49:20 crc kubenswrapper[4953]: I1211 10:49:20.467171 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" podUID="a27b4200-b26e-434d-be23-2940fe7a57c7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 11 10:49:20 crc kubenswrapper[4953]: I1211 10:49:20.510854 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gkctw" podUID="a27b4200-b26e-434d-be23-2940fe7a57c7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 11 10:49:20 crc kubenswrapper[4953]: I1211 10:49:20.582042 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-w2cg9" podUID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerName="registry-server" probeResult="failure" output=<
Dec 11 10:49:20 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s
Dec 11 10:49:20 crc kubenswrapper[4953]: >
Dec 11 10:49:20 crc kubenswrapper[4953]: I1211 10:49:20.606811 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-w2cg9" podUID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerName="registry-server" probeResult="failure" output=<
Dec 11 10:49:20 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s
Dec 11 10:49:20 crc kubenswrapper[4953]: >
Dec 11 10:49:48 crc kubenswrapper[4953]: I1211 10:49:48.193536 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:49:48 crc kubenswrapper[4953]: I1211 10:49:48.194154 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:49:48 crc kubenswrapper[4953]: I1211 10:49:48.194214 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898"
Dec 11 10:49:48 crc kubenswrapper[4953]: I1211 10:49:48.194936 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 10:49:48 crc kubenswrapper[4953]: I1211 10:49:48.195005 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" gracePeriod=600
Dec 11 10:49:48 crc kubenswrapper[4953]: E1211 10:49:48.346278 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:49:48 crc kubenswrapper[4953]: I1211 10:49:48.732899 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" exitCode=0
Dec 11 10:49:48 crc kubenswrapper[4953]: I1211 10:49:48.732952 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"}
Dec 11 10:49:48 crc kubenswrapper[4953]: I1211 10:49:48.732999 4953 scope.go:117] "RemoveContainer" containerID="94deffc6236f608b04d94b2a32d4ef8e60aa95b5ef694b77f8c376b292561de2"
Dec 11 10:49:48 crc kubenswrapper[4953]: I1211 10:49:48.733755 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:49:48 crc kubenswrapper[4953]: E1211 10:49:48.734170 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.481544 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wh95m"]
Dec 11 10:49:55 crc kubenswrapper[4953]: E1211 10:49:55.482602 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0908d506-4937-4766-a409-9b538c205c2d" containerName="registry-server"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.482627 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0908d506-4937-4766-a409-9b538c205c2d" containerName="registry-server"
Dec 11 10:49:55 crc kubenswrapper[4953]: E1211 10:49:55.482651 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49acecc-30fb-4529-862d-85d4a8f50935" containerName="extract-utilities"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.482660 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49acecc-30fb-4529-862d-85d4a8f50935" containerName="extract-utilities"
Dec 11 10:49:55 crc kubenswrapper[4953]: E1211 10:49:55.482681 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49acecc-30fb-4529-862d-85d4a8f50935" containerName="registry-server"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.482695 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49acecc-30fb-4529-862d-85d4a8f50935" containerName="registry-server"
Dec 11 10:49:55 crc kubenswrapper[4953]: E1211 10:49:55.482703 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49acecc-30fb-4529-862d-85d4a8f50935" containerName="extract-content"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.482710 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49acecc-30fb-4529-862d-85d4a8f50935" containerName="extract-content"
Dec 11 10:49:55 crc kubenswrapper[4953]: E1211 10:49:55.482723 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0908d506-4937-4766-a409-9b538c205c2d" containerName="extract-content"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.482730 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0908d506-4937-4766-a409-9b538c205c2d" containerName="extract-content"
Dec 11 10:49:55 crc kubenswrapper[4953]: E1211 10:49:55.482742 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05428f33-bd25-4df1-8237-2dd558f9b054" containerName="extract-content"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.482749 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="05428f33-bd25-4df1-8237-2dd558f9b054" containerName="extract-content"
Dec 11 10:49:55 crc kubenswrapper[4953]: E1211 10:49:55.482761 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05428f33-bd25-4df1-8237-2dd558f9b054" containerName="registry-server"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.482768 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="05428f33-bd25-4df1-8237-2dd558f9b054" containerName="registry-server"
Dec 11 10:49:55 crc kubenswrapper[4953]: E1211 10:49:55.482785 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05428f33-bd25-4df1-8237-2dd558f9b054" containerName="extract-utilities"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.482791 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="05428f33-bd25-4df1-8237-2dd558f9b054" containerName="extract-utilities"
Dec 11 10:49:55 crc kubenswrapper[4953]: E1211 10:49:55.482798 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0908d506-4937-4766-a409-9b538c205c2d" containerName="extract-utilities"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.482805 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0908d506-4937-4766-a409-9b538c205c2d" containerName="extract-utilities"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.482975 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="0908d506-4937-4766-a409-9b538c205c2d" containerName="registry-server"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.482986 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="05428f33-bd25-4df1-8237-2dd558f9b054" containerName="registry-server"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.483002 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e49acecc-30fb-4529-862d-85d4a8f50935" containerName="registry-server"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.484413 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.510059 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh95m"]
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.574592 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/146faa98-6251-4095-8cb9-693d7f6e9535-utilities\") pod \"redhat-marketplace-wh95m\" (UID: \"146faa98-6251-4095-8cb9-693d7f6e9535\") " pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.574650 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/146faa98-6251-4095-8cb9-693d7f6e9535-catalog-content\") pod \"redhat-marketplace-wh95m\" (UID: \"146faa98-6251-4095-8cb9-693d7f6e9535\") " pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.574719 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpbr8\" (UniqueName: \"kubernetes.io/projected/146faa98-6251-4095-8cb9-693d7f6e9535-kube-api-access-rpbr8\") pod \"redhat-marketplace-wh95m\" (UID: \"146faa98-6251-4095-8cb9-693d7f6e9535\") " pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.676452 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/146faa98-6251-4095-8cb9-693d7f6e9535-utilities\") pod \"redhat-marketplace-wh95m\" (UID: \"146faa98-6251-4095-8cb9-693d7f6e9535\") " pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.676527 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/146faa98-6251-4095-8cb9-693d7f6e9535-catalog-content\") pod \"redhat-marketplace-wh95m\" (UID: \"146faa98-6251-4095-8cb9-693d7f6e9535\") " pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.676612 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpbr8\" (UniqueName: \"kubernetes.io/projected/146faa98-6251-4095-8cb9-693d7f6e9535-kube-api-access-rpbr8\") pod \"redhat-marketplace-wh95m\" (UID: \"146faa98-6251-4095-8cb9-693d7f6e9535\") " pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.677750 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/146faa98-6251-4095-8cb9-693d7f6e9535-utilities\") pod \"redhat-marketplace-wh95m\" (UID: \"146faa98-6251-4095-8cb9-693d7f6e9535\") " pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.678470 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/146faa98-6251-4095-8cb9-693d7f6e9535-catalog-content\") pod \"redhat-marketplace-wh95m\" (UID: \"146faa98-6251-4095-8cb9-693d7f6e9535\") " pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.703994 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpbr8\" (UniqueName: \"kubernetes.io/projected/146faa98-6251-4095-8cb9-693d7f6e9535-kube-api-access-rpbr8\") pod \"redhat-marketplace-wh95m\" (UID: \"146faa98-6251-4095-8cb9-693d7f6e9535\") " pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:49:55 crc kubenswrapper[4953]: I1211 10:49:55.827161 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:49:56 crc kubenswrapper[4953]: I1211 10:49:56.140399 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh95m"]
Dec 11 10:49:56 crc kubenswrapper[4953]: I1211 10:49:56.802300 4953 generic.go:334] "Generic (PLEG): container finished" podID="146faa98-6251-4095-8cb9-693d7f6e9535" containerID="54930bc3318674a10485a0a3f19d19c985f26558c0a4ee9617b74d0fc21f34e8" exitCode=0
Dec 11 10:49:56 crc kubenswrapper[4953]: I1211 10:49:56.802368 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh95m" event={"ID":"146faa98-6251-4095-8cb9-693d7f6e9535","Type":"ContainerDied","Data":"54930bc3318674a10485a0a3f19d19c985f26558c0a4ee9617b74d0fc21f34e8"}
Dec 11 10:49:56 crc kubenswrapper[4953]: I1211 10:49:56.802696 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh95m" event={"ID":"146faa98-6251-4095-8cb9-693d7f6e9535","Type":"ContainerStarted","Data":"7d43c13e49ff9e2fded3f0d6c93321b587abf3c44e59cf1a08edcf8450e0a710"}
Dec 11 10:49:59 crc kubenswrapper[4953]: I1211 10:49:59.828155 4953 generic.go:334] "Generic (PLEG): container finished" podID="146faa98-6251-4095-8cb9-693d7f6e9535" containerID="ab8ea0672da8d331d8e3d131f8bc187659ddf460970c2e18b7d8cbec3d9b9f52" exitCode=0
Dec 11 10:49:59 crc kubenswrapper[4953]: I1211 10:49:59.828633 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh95m" event={"ID":"146faa98-6251-4095-8cb9-693d7f6e9535","Type":"ContainerDied","Data":"ab8ea0672da8d331d8e3d131f8bc187659ddf460970c2e18b7d8cbec3d9b9f52"}
Dec 11 10:50:00 crc kubenswrapper[4953]: I1211 10:50:00.473638 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:50:00 crc kubenswrapper[4953]: E1211 10:50:00.474003 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:50:00 crc kubenswrapper[4953]: I1211 10:50:00.839015 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh95m" event={"ID":"146faa98-6251-4095-8cb9-693d7f6e9535","Type":"ContainerStarted","Data":"a438d2d4fcd94a922f2f2e994eeb5bf1512e143a5cb8a32b8ed9827668e45923"}
Dec 11 10:50:00 crc kubenswrapper[4953]: I1211 10:50:00.865146 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wh95m" podStartSLOduration=2.276279596 podStartE2EDuration="5.86512116s" podCreationTimestamp="2025-12-11 10:49:55 +0000 UTC" firstStartedPulling="2025-12-11 10:49:56.804248571 +0000 UTC m=+2314.828107604" lastFinishedPulling="2025-12-11 10:50:00.393090115 +0000 UTC m=+2318.416949168" observedRunningTime="2025-12-11 10:50:00.862127296 +0000 UTC m=+2318.885986329" watchObservedRunningTime="2025-12-11 10:50:00.86512116 +0000 UTC m=+2318.888980203"
Dec 11 10:50:05 crc kubenswrapper[4953]: I1211 10:50:05.828412 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:50:05 crc kubenswrapper[4953]: I1211 10:50:05.828810 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:50:05 crc kubenswrapper[4953]: I1211 10:50:05.881290 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:50:05 crc kubenswrapper[4953]: I1211 10:50:05.926923 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:50:06 crc kubenswrapper[4953]: I1211 10:50:06.129103 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh95m"]
Dec 11 10:50:07 crc kubenswrapper[4953]: I1211 10:50:07.890037 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wh95m" podUID="146faa98-6251-4095-8cb9-693d7f6e9535" containerName="registry-server" containerID="cri-o://a438d2d4fcd94a922f2f2e994eeb5bf1512e143a5cb8a32b8ed9827668e45923" gracePeriod=2
Dec 11 10:50:09 crc kubenswrapper[4953]: E1211 10:50:09.319532 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod146faa98_6251_4095_8cb9_693d7f6e9535.slice/crio-conmon-a438d2d4fcd94a922f2f2e994eeb5bf1512e143a5cb8a32b8ed9827668e45923.scope\": RecentStats: unable to find data in memory cache]"
Dec 11 10:50:09 crc kubenswrapper[4953]: I1211 10:50:09.907633 4953 generic.go:334] "Generic (PLEG): container finished" podID="146faa98-6251-4095-8cb9-693d7f6e9535" containerID="a438d2d4fcd94a922f2f2e994eeb5bf1512e143a5cb8a32b8ed9827668e45923" exitCode=0
Dec 11 10:50:09 crc kubenswrapper[4953]: I1211 10:50:09.907686 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh95m" event={"ID":"146faa98-6251-4095-8cb9-693d7f6e9535","Type":"ContainerDied","Data":"a438d2d4fcd94a922f2f2e994eeb5bf1512e143a5cb8a32b8ed9827668e45923"}
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.277028 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.315928 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpbr8\" (UniqueName: \"kubernetes.io/projected/146faa98-6251-4095-8cb9-693d7f6e9535-kube-api-access-rpbr8\") pod \"146faa98-6251-4095-8cb9-693d7f6e9535\" (UID: \"146faa98-6251-4095-8cb9-693d7f6e9535\") "
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.315983 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/146faa98-6251-4095-8cb9-693d7f6e9535-catalog-content\") pod \"146faa98-6251-4095-8cb9-693d7f6e9535\" (UID: \"146faa98-6251-4095-8cb9-693d7f6e9535\") "
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.316007 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/146faa98-6251-4095-8cb9-693d7f6e9535-utilities\") pod \"146faa98-6251-4095-8cb9-693d7f6e9535\" (UID: \"146faa98-6251-4095-8cb9-693d7f6e9535\") "
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.317166 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/146faa98-6251-4095-8cb9-693d7f6e9535-utilities" (OuterVolumeSpecName: "utilities") pod "146faa98-6251-4095-8cb9-693d7f6e9535" (UID: "146faa98-6251-4095-8cb9-693d7f6e9535"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.322253 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146faa98-6251-4095-8cb9-693d7f6e9535-kube-api-access-rpbr8" (OuterVolumeSpecName: "kube-api-access-rpbr8") pod "146faa98-6251-4095-8cb9-693d7f6e9535" (UID: "146faa98-6251-4095-8cb9-693d7f6e9535"). InnerVolumeSpecName "kube-api-access-rpbr8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.338301 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/146faa98-6251-4095-8cb9-693d7f6e9535-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "146faa98-6251-4095-8cb9-693d7f6e9535" (UID: "146faa98-6251-4095-8cb9-693d7f6e9535"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.417501 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpbr8\" (UniqueName: \"kubernetes.io/projected/146faa98-6251-4095-8cb9-693d7f6e9535-kube-api-access-rpbr8\") on node \"crc\" DevicePath \"\""
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.417557 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/146faa98-6251-4095-8cb9-693d7f6e9535-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.417587 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/146faa98-6251-4095-8cb9-693d7f6e9535-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.924204 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh95m" event={"ID":"146faa98-6251-4095-8cb9-693d7f6e9535","Type":"ContainerDied","Data":"7d43c13e49ff9e2fded3f0d6c93321b587abf3c44e59cf1a08edcf8450e0a710"}
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.924272 4953 scope.go:117] "RemoveContainer" containerID="a438d2d4fcd94a922f2f2e994eeb5bf1512e143a5cb8a32b8ed9827668e45923"
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.924396 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh95m"
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.948768 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh95m"]
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.953931 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh95m"]
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.956741 4953 scope.go:117] "RemoveContainer" containerID="ab8ea0672da8d331d8e3d131f8bc187659ddf460970c2e18b7d8cbec3d9b9f52"
Dec 11 10:50:10 crc kubenswrapper[4953]: I1211 10:50:10.991117 4953 scope.go:117] "RemoveContainer" containerID="54930bc3318674a10485a0a3f19d19c985f26558c0a4ee9617b74d0fc21f34e8"
Dec 11 10:50:12 crc kubenswrapper[4953]: I1211 10:50:12.479177 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:50:12 crc kubenswrapper[4953]: E1211 10:50:12.480742 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:50:12 crc kubenswrapper[4953]: I1211 10:50:12.483911 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="146faa98-6251-4095-8cb9-693d7f6e9535" path="/var/lib/kubelet/pods/146faa98-6251-4095-8cb9-693d7f6e9535/volumes"
Dec 11 10:50:27 crc kubenswrapper[4953]: I1211 10:50:27.474539 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:50:27 crc kubenswrapper[4953]: E1211 10:50:27.476185 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:50:42 crc kubenswrapper[4953]: I1211 10:50:42.481305 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:50:42 crc kubenswrapper[4953]: E1211 10:50:42.482710 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:50:58 crc kubenswrapper[4953]: I1211 10:50:58.474402 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:50:58 crc kubenswrapper[4953]: E1211 10:50:58.475186 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:51:12 crc kubenswrapper[4953]: I1211 10:51:12.479419 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:51:12 crc kubenswrapper[4953]: E1211 10:51:12.480500 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:51:27 crc kubenswrapper[4953]: I1211 10:51:27.473543 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:51:27 crc kubenswrapper[4953]: E1211 10:51:27.474272 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:51:38 crc kubenswrapper[4953]: I1211 10:51:38.473770 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:51:38 crc kubenswrapper[4953]: E1211 10:51:38.474557 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:51:50 crc kubenswrapper[4953]: I1211 10:51:50.473541 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:51:50 crc kubenswrapper[4953]: E1211 10:51:50.474177 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:52:04 crc kubenswrapper[4953]: I1211 10:52:04.473693 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:52:04 crc kubenswrapper[4953]: E1211 10:52:04.474472 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:52:16 crc kubenswrapper[4953]: I1211 10:52:16.473427 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:52:16 crc kubenswrapper[4953]: E1211 10:52:16.474294 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:52:30 crc kubenswrapper[4953]: I1211 10:52:30.473624 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:52:30 crc kubenswrapper[4953]: E1211 10:52:30.474684 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:52:41 crc kubenswrapper[4953]: I1211 10:52:41.473908 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610"
Dec 11 10:52:41 crc kubenswrapper[4953]: E1211 10:52:41.474951 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 10:52:55 crc kubenswrapper[4953]: I1211 10:52:55.473116 4953
scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" Dec 11 10:52:55 crc kubenswrapper[4953]: E1211 10:52:55.474149 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:53:09 crc kubenswrapper[4953]: I1211 10:53:09.473844 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" Dec 11 10:53:09 crc kubenswrapper[4953]: E1211 10:53:09.474684 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:53:24 crc kubenswrapper[4953]: I1211 10:53:24.473827 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" Dec 11 10:53:24 crc kubenswrapper[4953]: E1211 10:53:24.474776 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:53:37 crc kubenswrapper[4953]: I1211 10:53:37.474298 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" Dec 11 10:53:37 crc kubenswrapper[4953]: E1211 10:53:37.475353 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:53:51 crc kubenswrapper[4953]: I1211 10:53:51.473780 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" Dec 11 10:53:51 crc kubenswrapper[4953]: E1211 10:53:51.474468 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:54:04 crc kubenswrapper[4953]: I1211 10:54:04.473633 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" Dec 11 10:54:04 crc kubenswrapper[4953]: E1211 10:54:04.474372 4953 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:54:16 crc kubenswrapper[4953]: I1211 10:54:16.474407 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" Dec 11 10:54:16 crc kubenswrapper[4953]: E1211 10:54:16.475128 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:54:30 crc kubenswrapper[4953]: I1211 10:54:30.474490 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" Dec 11 10:54:30 crc kubenswrapper[4953]: E1211 10:54:30.476189 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:54:44 crc kubenswrapper[4953]: I1211 10:54:44.473543 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" Dec 11 10:54:44 crc kubenswrapper[4953]: E1211 10:54:44.506219 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 10:54:58 crc kubenswrapper[4953]: I1211 10:54:58.473655 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" Dec 11 10:54:58 crc kubenswrapper[4953]: I1211 10:54:58.961669 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"f51d1aefa9fd63384083ddae5740d863c31e6a3f5174b662a513773698844a30"} Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.075846 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hxhbq"] Dec 11 10:56:45 crc kubenswrapper[4953]: E1211 10:56:45.076893 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146faa98-6251-4095-8cb9-693d7f6e9535" containerName="extract-content" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.076927 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="146faa98-6251-4095-8cb9-693d7f6e9535" containerName="extract-content" Dec 11 10:56:45 crc 
kubenswrapper[4953]: E1211 10:56:45.076955 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146faa98-6251-4095-8cb9-693d7f6e9535" containerName="extract-utilities" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.076964 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="146faa98-6251-4095-8cb9-693d7f6e9535" containerName="extract-utilities" Dec 11 10:56:45 crc kubenswrapper[4953]: E1211 10:56:45.076975 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146faa98-6251-4095-8cb9-693d7f6e9535" containerName="registry-server" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.076983 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="146faa98-6251-4095-8cb9-693d7f6e9535" containerName="registry-server" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.077193 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="146faa98-6251-4095-8cb9-693d7f6e9535" containerName="registry-server" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.078434 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.101119 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hxhbq"] Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.166550 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2fafff8-1d94-4365-bef9-68f64c4e1523-catalog-content\") pod \"certified-operators-hxhbq\" (UID: \"a2fafff8-1d94-4365-bef9-68f64c4e1523\") " pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.166606 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2fafff8-1d94-4365-bef9-68f64c4e1523-utilities\") pod \"certified-operators-hxhbq\" (UID: \"a2fafff8-1d94-4365-bef9-68f64c4e1523\") " pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.166639 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmzd6\" (UniqueName: \"kubernetes.io/projected/a2fafff8-1d94-4365-bef9-68f64c4e1523-kube-api-access-nmzd6\") pod \"certified-operators-hxhbq\" (UID: \"a2fafff8-1d94-4365-bef9-68f64c4e1523\") " pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.267633 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2fafff8-1d94-4365-bef9-68f64c4e1523-catalog-content\") pod \"certified-operators-hxhbq\" (UID: \"a2fafff8-1d94-4365-bef9-68f64c4e1523\") " pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.267678 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2fafff8-1d94-4365-bef9-68f64c4e1523-utilities\") pod \"certified-operators-hxhbq\" (UID: \"a2fafff8-1d94-4365-bef9-68f64c4e1523\") " pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.267712 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmzd6\" 
(UniqueName: \"kubernetes.io/projected/a2fafff8-1d94-4365-bef9-68f64c4e1523-kube-api-access-nmzd6\") pod \"certified-operators-hxhbq\" (UID: \"a2fafff8-1d94-4365-bef9-68f64c4e1523\") " pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.268169 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2fafff8-1d94-4365-bef9-68f64c4e1523-catalog-content\") pod \"certified-operators-hxhbq\" (UID: \"a2fafff8-1d94-4365-bef9-68f64c4e1523\") " pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.268294 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2fafff8-1d94-4365-bef9-68f64c4e1523-utilities\") pod \"certified-operators-hxhbq\" (UID: \"a2fafff8-1d94-4365-bef9-68f64c4e1523\") " pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.300087 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmzd6\" (UniqueName: \"kubernetes.io/projected/a2fafff8-1d94-4365-bef9-68f64c4e1523-kube-api-access-nmzd6\") pod \"certified-operators-hxhbq\" (UID: \"a2fafff8-1d94-4365-bef9-68f64c4e1523\") " pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.400565 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:45 crc kubenswrapper[4953]: I1211 10:56:45.929829 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hxhbq"] Dec 11 10:56:46 crc kubenswrapper[4953]: I1211 10:56:46.822908 4953 generic.go:334] "Generic (PLEG): container finished" podID="a2fafff8-1d94-4365-bef9-68f64c4e1523" containerID="8273a577385eddf7468a7e70930cfd03b8122651debd324379018287794ea1f7" exitCode=0 Dec 11 10:56:46 crc kubenswrapper[4953]: I1211 10:56:46.823234 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxhbq" event={"ID":"a2fafff8-1d94-4365-bef9-68f64c4e1523","Type":"ContainerDied","Data":"8273a577385eddf7468a7e70930cfd03b8122651debd324379018287794ea1f7"} Dec 11 10:56:46 crc kubenswrapper[4953]: I1211 10:56:46.823267 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxhbq" event={"ID":"a2fafff8-1d94-4365-bef9-68f64c4e1523","Type":"ContainerStarted","Data":"4b1fbca0962dd42d3c64e22ed2e32d08c54fee6fc5935fbbf94f6ff2d38631a3"} Dec 11 10:56:46 crc kubenswrapper[4953]: I1211 10:56:46.825244 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:56:47 crc kubenswrapper[4953]: I1211 10:56:47.830798 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxhbq" event={"ID":"a2fafff8-1d94-4365-bef9-68f64c4e1523","Type":"ContainerStarted","Data":"6e9e7458cb7228a285c36b07d62a0aaea1117c8fdabcace864ac497736e307ec"} Dec 11 10:56:48 crc kubenswrapper[4953]: E1211 10:56:48.096186 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2fafff8_1d94_4365_bef9_68f64c4e1523.slice/crio-conmon-6e9e7458cb7228a285c36b07d62a0aaea1117c8fdabcace864ac497736e307ec.scope\": RecentStats: unable to 
find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2fafff8_1d94_4365_bef9_68f64c4e1523.slice/crio-6e9e7458cb7228a285c36b07d62a0aaea1117c8fdabcace864ac497736e307ec.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:56:48 crc kubenswrapper[4953]: I1211 10:56:48.838277 4953 generic.go:334] "Generic (PLEG): container finished" podID="a2fafff8-1d94-4365-bef9-68f64c4e1523" containerID="6e9e7458cb7228a285c36b07d62a0aaea1117c8fdabcace864ac497736e307ec" exitCode=0 Dec 11 10:56:48 crc kubenswrapper[4953]: I1211 10:56:48.838702 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxhbq" event={"ID":"a2fafff8-1d94-4365-bef9-68f64c4e1523","Type":"ContainerDied","Data":"6e9e7458cb7228a285c36b07d62a0aaea1117c8fdabcace864ac497736e307ec"} Dec 11 10:56:50 crc kubenswrapper[4953]: I1211 10:56:50.869267 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxhbq" event={"ID":"a2fafff8-1d94-4365-bef9-68f64c4e1523","Type":"ContainerStarted","Data":"e17f9ff7bf391509aa9bb54667b06cd210dc1e307cd1a233c3e5d3ee68d85be0"} Dec 11 10:56:50 crc kubenswrapper[4953]: I1211 10:56:50.887146 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hxhbq" podStartSLOduration=2.803054349 podStartE2EDuration="5.887121427s" podCreationTimestamp="2025-12-11 10:56:45 +0000 UTC" firstStartedPulling="2025-12-11 10:56:46.825020397 +0000 UTC m=+2724.848879420" lastFinishedPulling="2025-12-11 10:56:49.909087465 +0000 UTC m=+2727.932946498" observedRunningTime="2025-12-11 10:56:50.885525327 +0000 UTC m=+2728.909384380" watchObservedRunningTime="2025-12-11 10:56:50.887121427 +0000 UTC m=+2728.910980460" Dec 11 10:56:55 crc kubenswrapper[4953]: I1211 10:56:55.401030 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:55 crc kubenswrapper[4953]: I1211 10:56:55.402519 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:55 crc kubenswrapper[4953]: I1211 10:56:55.450326 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:55 crc kubenswrapper[4953]: I1211 10:56:55.954121 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:56 crc kubenswrapper[4953]: I1211 10:56:56.002376 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hxhbq"] Dec 11 10:56:57 crc kubenswrapper[4953]: I1211 10:56:57.945944 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hxhbq" podUID="a2fafff8-1d94-4365-bef9-68f64c4e1523" containerName="registry-server" containerID="cri-o://e17f9ff7bf391509aa9bb54667b06cd210dc1e307cd1a233c3e5d3ee68d85be0" gracePeriod=2 Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.182764 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ljthq"] Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.186080 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.198895 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ljthq"] Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.277977 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pfcx\" (UniqueName: \"kubernetes.io/projected/f6197a1a-b251-497e-9a61-902c5baa17fc-kube-api-access-5pfcx\") pod \"redhat-operators-ljthq\" (UID: \"f6197a1a-b251-497e-9a61-902c5baa17fc\") " pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.278040 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6197a1a-b251-497e-9a61-902c5baa17fc-catalog-content\") pod \"redhat-operators-ljthq\" (UID: \"f6197a1a-b251-497e-9a61-902c5baa17fc\") " pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.278072 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6197a1a-b251-497e-9a61-902c5baa17fc-utilities\") pod \"redhat-operators-ljthq\" (UID: \"f6197a1a-b251-497e-9a61-902c5baa17fc\") " pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.379024 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pfcx\" (UniqueName: \"kubernetes.io/projected/f6197a1a-b251-497e-9a61-902c5baa17fc-kube-api-access-5pfcx\") pod \"redhat-operators-ljthq\" (UID: \"f6197a1a-b251-497e-9a61-902c5baa17fc\") " pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.379093 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6197a1a-b251-497e-9a61-902c5baa17fc-catalog-content\") pod \"redhat-operators-ljthq\" (UID: \"f6197a1a-b251-497e-9a61-902c5baa17fc\") " pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.379154 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6197a1a-b251-497e-9a61-902c5baa17fc-utilities\") pod \"redhat-operators-ljthq\" (UID: \"f6197a1a-b251-497e-9a61-902c5baa17fc\") " pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.379697 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6197a1a-b251-497e-9a61-902c5baa17fc-catalog-content\") pod \"redhat-operators-ljthq\" (UID: \"f6197a1a-b251-497e-9a61-902c5baa17fc\") " pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.379800 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6197a1a-b251-497e-9a61-902c5baa17fc-utilities\") pod \"redhat-operators-ljthq\" (UID: \"f6197a1a-b251-497e-9a61-902c5baa17fc\") " pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.407758 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5pfcx\" (UniqueName: \"kubernetes.io/projected/f6197a1a-b251-497e-9a61-902c5baa17fc-kube-api-access-5pfcx\") pod \"redhat-operators-ljthq\" (UID: \"f6197a1a-b251-497e-9a61-902c5baa17fc\") " pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.510091 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.965248 4953 generic.go:334] "Generic (PLEG): container finished" podID="a2fafff8-1d94-4365-bef9-68f64c4e1523" containerID="e17f9ff7bf391509aa9bb54667b06cd210dc1e307cd1a233c3e5d3ee68d85be0" exitCode=0 Dec 11 10:56:58 crc kubenswrapper[4953]: I1211 10:56:58.965325 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxhbq" event={"ID":"a2fafff8-1d94-4365-bef9-68f64c4e1523","Type":"ContainerDied","Data":"e17f9ff7bf391509aa9bb54667b06cd210dc1e307cd1a233c3e5d3ee68d85be0"} Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.022560 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ljthq"] Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.545876 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.697320 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2fafff8-1d94-4365-bef9-68f64c4e1523-utilities\") pod \"a2fafff8-1d94-4365-bef9-68f64c4e1523\" (UID: \"a2fafff8-1d94-4365-bef9-68f64c4e1523\") " Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.697383 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmzd6\" (UniqueName: \"kubernetes.io/projected/a2fafff8-1d94-4365-bef9-68f64c4e1523-kube-api-access-nmzd6\") pod \"a2fafff8-1d94-4365-bef9-68f64c4e1523\" (UID: \"a2fafff8-1d94-4365-bef9-68f64c4e1523\") " Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.697533 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2fafff8-1d94-4365-bef9-68f64c4e1523-catalog-content\") pod \"a2fafff8-1d94-4365-bef9-68f64c4e1523\" (UID: \"a2fafff8-1d94-4365-bef9-68f64c4e1523\") " Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.698639 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2fafff8-1d94-4365-bef9-68f64c4e1523-utilities" (OuterVolumeSpecName: "utilities") pod "a2fafff8-1d94-4365-bef9-68f64c4e1523" (UID: "a2fafff8-1d94-4365-bef9-68f64c4e1523"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.702987 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2fafff8-1d94-4365-bef9-68f64c4e1523-kube-api-access-nmzd6" (OuterVolumeSpecName: "kube-api-access-nmzd6") pod "a2fafff8-1d94-4365-bef9-68f64c4e1523" (UID: "a2fafff8-1d94-4365-bef9-68f64c4e1523"). InnerVolumeSpecName "kube-api-access-nmzd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.799855 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2fafff8-1d94-4365-bef9-68f64c4e1523-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.799893 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmzd6\" (UniqueName: \"kubernetes.io/projected/a2fafff8-1d94-4365-bef9-68f64c4e1523-kube-api-access-nmzd6\") on node \"crc\" DevicePath \"\"" Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.801242 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2fafff8-1d94-4365-bef9-68f64c4e1523-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2fafff8-1d94-4365-bef9-68f64c4e1523" (UID: "a2fafff8-1d94-4365-bef9-68f64c4e1523"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.901219 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2fafff8-1d94-4365-bef9-68f64c4e1523-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.973622 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hxhbq" Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.973861 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxhbq" event={"ID":"a2fafff8-1d94-4365-bef9-68f64c4e1523","Type":"ContainerDied","Data":"4b1fbca0962dd42d3c64e22ed2e32d08c54fee6fc5935fbbf94f6ff2d38631a3"} Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.973916 4953 scope.go:117] "RemoveContainer" containerID="e17f9ff7bf391509aa9bb54667b06cd210dc1e307cd1a233c3e5d3ee68d85be0" Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.974945 4953 generic.go:334] "Generic (PLEG): container finished" podID="f6197a1a-b251-497e-9a61-902c5baa17fc" containerID="24e656fec8ab26e5e1ee731710dc103ea5c8e5ac57a470ca8c9d1c3b1d83da98" exitCode=0 Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.974977 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljthq" event={"ID":"f6197a1a-b251-497e-9a61-902c5baa17fc","Type":"ContainerDied","Data":"24e656fec8ab26e5e1ee731710dc103ea5c8e5ac57a470ca8c9d1c3b1d83da98"} Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.974994 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljthq" event={"ID":"f6197a1a-b251-497e-9a61-902c5baa17fc","Type":"ContainerStarted","Data":"08862718575b6303a29cbb5c381349348c0a21626ff6a9f7fbc033c60e902e3d"} Dec 11 10:56:59 crc kubenswrapper[4953]: I1211 10:56:59.993492 4953 scope.go:117] "RemoveContainer" containerID="6e9e7458cb7228a285c36b07d62a0aaea1117c8fdabcace864ac497736e307ec" Dec 11 10:57:00 crc kubenswrapper[4953]: I1211 10:57:00.032177 4953 scope.go:117] "RemoveContainer" containerID="8273a577385eddf7468a7e70930cfd03b8122651debd324379018287794ea1f7" Dec 11 10:57:00 crc kubenswrapper[4953]: I1211 10:57:00.033230 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hxhbq"] Dec 11 10:57:00 crc kubenswrapper[4953]: I1211 10:57:00.040395 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-hxhbq"] Dec 11 10:57:00 crc kubenswrapper[4953]: I1211 10:57:00.485695 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2fafff8-1d94-4365-bef9-68f64c4e1523" path="/var/lib/kubelet/pods/a2fafff8-1d94-4365-bef9-68f64c4e1523/volumes" Dec 11 10:57:00 crc kubenswrapper[4953]: I1211 10:57:00.991120 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljthq" event={"ID":"f6197a1a-b251-497e-9a61-902c5baa17fc","Type":"ContainerStarted","Data":"bc7aab122f650620d020ae70d7d4b69237d5ac98ed901b1b8e0a14721cd269b5"} Dec 11 10:57:02 crc kubenswrapper[4953]: I1211 10:57:02.005389 4953 generic.go:334] "Generic (PLEG): container finished" podID="f6197a1a-b251-497e-9a61-902c5baa17fc" containerID="bc7aab122f650620d020ae70d7d4b69237d5ac98ed901b1b8e0a14721cd269b5" exitCode=0 Dec 11 10:57:02 crc kubenswrapper[4953]: I1211 10:57:02.005472 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljthq" event={"ID":"f6197a1a-b251-497e-9a61-902c5baa17fc","Type":"ContainerDied","Data":"bc7aab122f650620d020ae70d7d4b69237d5ac98ed901b1b8e0a14721cd269b5"} Dec 11 10:57:03 crc kubenswrapper[4953]: I1211 10:57:03.014917 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljthq" event={"ID":"f6197a1a-b251-497e-9a61-902c5baa17fc","Type":"ContainerStarted","Data":"7283335fca2b69e106937402d031dd2fee5482d638a9e2d591a5473a9469256c"} Dec 11 10:57:08 crc kubenswrapper[4953]: I1211 10:57:08.510748 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:57:08 crc kubenswrapper[4953]: I1211 10:57:08.511840 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:57:08 crc kubenswrapper[4953]: I1211 10:57:08.569827 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:57:08 crc kubenswrapper[4953]: I1211 10:57:08.589334 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ljthq" podStartSLOduration=7.951903426 podStartE2EDuration="10.589317346s" podCreationTimestamp="2025-12-11 10:56:58 +0000 UTC" firstStartedPulling="2025-12-11 10:56:59.976400295 +0000 UTC m=+2738.000259328" lastFinishedPulling="2025-12-11 10:57:02.613814215 +0000 UTC m=+2740.637673248" observedRunningTime="2025-12-11 10:57:03.035885728 +0000 UTC m=+2741.059744761" watchObservedRunningTime="2025-12-11 10:57:08.589317346 +0000 UTC m=+2746.613176379" Dec 11 10:57:09 crc kubenswrapper[4953]: I1211 10:57:09.119546 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:57:09 crc kubenswrapper[4953]: I1211 10:57:09.164169 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ljthq"] Dec 11 10:57:11 crc kubenswrapper[4953]: I1211 10:57:11.081402 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ljthq" podUID="f6197a1a-b251-497e-9a61-902c5baa17fc" containerName="registry-server" containerID="cri-o://7283335fca2b69e106937402d031dd2fee5482d638a9e2d591a5473a9469256c" gracePeriod=2 Dec 11 10:57:14 crc kubenswrapper[4953]: I1211 10:57:14.106843 4953 generic.go:334] "Generic (PLEG): container 
finished" podID="f6197a1a-b251-497e-9a61-902c5baa17fc" containerID="7283335fca2b69e106937402d031dd2fee5482d638a9e2d591a5473a9469256c" exitCode=0 Dec 11 10:57:14 crc kubenswrapper[4953]: I1211 10:57:14.107302 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljthq" event={"ID":"f6197a1a-b251-497e-9a61-902c5baa17fc","Type":"ContainerDied","Data":"7283335fca2b69e106937402d031dd2fee5482d638a9e2d591a5473a9469256c"} Dec 11 10:57:14 crc kubenswrapper[4953]: I1211 10:57:14.174101 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:57:14 crc kubenswrapper[4953]: I1211 10:57:14.359948 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6197a1a-b251-497e-9a61-902c5baa17fc-catalog-content\") pod \"f6197a1a-b251-497e-9a61-902c5baa17fc\" (UID: \"f6197a1a-b251-497e-9a61-902c5baa17fc\") " Dec 11 10:57:14 crc kubenswrapper[4953]: I1211 10:57:14.360063 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6197a1a-b251-497e-9a61-902c5baa17fc-utilities\") pod \"f6197a1a-b251-497e-9a61-902c5baa17fc\" (UID: \"f6197a1a-b251-497e-9a61-902c5baa17fc\") " Dec 11 10:57:14 crc kubenswrapper[4953]: I1211 10:57:14.360084 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pfcx\" (UniqueName: \"kubernetes.io/projected/f6197a1a-b251-497e-9a61-902c5baa17fc-kube-api-access-5pfcx\") pod \"f6197a1a-b251-497e-9a61-902c5baa17fc\" (UID: \"f6197a1a-b251-497e-9a61-902c5baa17fc\") " Dec 11 10:57:14 crc kubenswrapper[4953]: I1211 10:57:14.361445 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6197a1a-b251-497e-9a61-902c5baa17fc-utilities" (OuterVolumeSpecName: "utilities") pod "f6197a1a-b251-497e-9a61-902c5baa17fc" (UID: "f6197a1a-b251-497e-9a61-902c5baa17fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:57:14 crc kubenswrapper[4953]: I1211 10:57:14.368338 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6197a1a-b251-497e-9a61-902c5baa17fc-kube-api-access-5pfcx" (OuterVolumeSpecName: "kube-api-access-5pfcx") pod "f6197a1a-b251-497e-9a61-902c5baa17fc" (UID: "f6197a1a-b251-497e-9a61-902c5baa17fc"). InnerVolumeSpecName "kube-api-access-5pfcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:57:14 crc kubenswrapper[4953]: I1211 10:57:14.461537 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6197a1a-b251-497e-9a61-902c5baa17fc-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:57:14 crc kubenswrapper[4953]: I1211 10:57:14.461597 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pfcx\" (UniqueName: \"kubernetes.io/projected/f6197a1a-b251-497e-9a61-902c5baa17fc-kube-api-access-5pfcx\") on node \"crc\" DevicePath \"\"" Dec 11 10:57:14 crc kubenswrapper[4953]: I1211 10:57:14.482851 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6197a1a-b251-497e-9a61-902c5baa17fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6197a1a-b251-497e-9a61-902c5baa17fc" (UID: "f6197a1a-b251-497e-9a61-902c5baa17fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:57:14 crc kubenswrapper[4953]: I1211 10:57:14.562965 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6197a1a-b251-497e-9a61-902c5baa17fc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:57:15 crc kubenswrapper[4953]: I1211 10:57:15.118276 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljthq" event={"ID":"f6197a1a-b251-497e-9a61-902c5baa17fc","Type":"ContainerDied","Data":"08862718575b6303a29cbb5c381349348c0a21626ff6a9f7fbc033c60e902e3d"} Dec 11 10:57:15 crc kubenswrapper[4953]: I1211 10:57:15.118389 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ljthq" Dec 11 10:57:15 crc kubenswrapper[4953]: I1211 10:57:15.119370 4953 scope.go:117] "RemoveContainer" containerID="7283335fca2b69e106937402d031dd2fee5482d638a9e2d591a5473a9469256c" Dec 11 10:57:15 crc kubenswrapper[4953]: I1211 10:57:15.149951 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ljthq"] Dec 11 10:57:15 crc kubenswrapper[4953]: I1211 10:57:15.159710 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ljthq"] Dec 11 10:57:15 crc kubenswrapper[4953]: I1211 10:57:15.160030 4953 scope.go:117] "RemoveContainer" containerID="bc7aab122f650620d020ae70d7d4b69237d5ac98ed901b1b8e0a14721cd269b5" Dec 11 10:57:15 crc kubenswrapper[4953]: I1211 10:57:15.180789 4953 scope.go:117] "RemoveContainer" containerID="24e656fec8ab26e5e1ee731710dc103ea5c8e5ac57a470ca8c9d1c3b1d83da98" Dec 11 10:57:16 crc kubenswrapper[4953]: I1211 10:57:16.485252 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6197a1a-b251-497e-9a61-902c5baa17fc" path="/var/lib/kubelet/pods/f6197a1a-b251-497e-9a61-902c5baa17fc/volumes" Dec 11 10:57:18 crc kubenswrapper[4953]: I1211 10:57:18.194258 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:57:18 crc kubenswrapper[4953]: I1211 10:57:18.194339 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:57:48 crc kubenswrapper[4953]: I1211 10:57:48.194303 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:57:48 crc kubenswrapper[4953]: I1211 10:57:48.194984 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:58:18 crc kubenswrapper[4953]: I1211 10:58:18.194459 4953 patch_prober.go:28] 
interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:58:18 crc kubenswrapper[4953]: I1211 10:58:18.195000 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:58:18 crc kubenswrapper[4953]: I1211 10:58:18.195054 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 10:58:18 crc kubenswrapper[4953]: I1211 10:58:18.195706 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f51d1aefa9fd63384083ddae5740d863c31e6a3f5174b662a513773698844a30"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:58:18 crc kubenswrapper[4953]: I1211 10:58:18.195768 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://f51d1aefa9fd63384083ddae5740d863c31e6a3f5174b662a513773698844a30" gracePeriod=600 Dec 11 10:58:18 crc kubenswrapper[4953]: I1211 10:58:18.708267 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="f51d1aefa9fd63384083ddae5740d863c31e6a3f5174b662a513773698844a30" exitCode=0 Dec 11 10:58:18 crc kubenswrapper[4953]: I1211 10:58:18.708341 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"f51d1aefa9fd63384083ddae5740d863c31e6a3f5174b662a513773698844a30"} Dec 11 10:58:18 crc kubenswrapper[4953]: I1211 10:58:18.708649 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef"} Dec 11 10:58:18 crc kubenswrapper[4953]: I1211 10:58:18.708685 4953 scope.go:117] "RemoveContainer" containerID="db311be6007e706389a642343b1bda137f9065a87b1e374f50d74165612ba610" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.374512 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vl8jc"] Dec 11 10:59:58 crc kubenswrapper[4953]: E1211 10:59:58.375444 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6197a1a-b251-497e-9a61-902c5baa17fc" containerName="extract-content" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.375466 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6197a1a-b251-497e-9a61-902c5baa17fc" containerName="extract-content" Dec 11 10:59:58 crc kubenswrapper[4953]: E1211 10:59:58.375483 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fafff8-1d94-4365-bef9-68f64c4e1523" 
containerName="extract-utilities" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.375489 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fafff8-1d94-4365-bef9-68f64c4e1523" containerName="extract-utilities" Dec 11 10:59:58 crc kubenswrapper[4953]: E1211 10:59:58.375497 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6197a1a-b251-497e-9a61-902c5baa17fc" containerName="registry-server" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.375504 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6197a1a-b251-497e-9a61-902c5baa17fc" containerName="registry-server" Dec 11 10:59:58 crc kubenswrapper[4953]: E1211 10:59:58.375515 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fafff8-1d94-4365-bef9-68f64c4e1523" containerName="extract-content" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.375521 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fafff8-1d94-4365-bef9-68f64c4e1523" containerName="extract-content" Dec 11 10:59:58 crc kubenswrapper[4953]: E1211 10:59:58.375527 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6197a1a-b251-497e-9a61-902c5baa17fc" containerName="extract-utilities" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.375533 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6197a1a-b251-497e-9a61-902c5baa17fc" containerName="extract-utilities" Dec 11 10:59:58 crc kubenswrapper[4953]: E1211 10:59:58.375554 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fafff8-1d94-4365-bef9-68f64c4e1523" containerName="registry-server" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.375560 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fafff8-1d94-4365-bef9-68f64c4e1523" containerName="registry-server" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.375730 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2fafff8-1d94-4365-bef9-68f64c4e1523" containerName="registry-server" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.375763 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6197a1a-b251-497e-9a61-902c5baa17fc" containerName="registry-server" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.376926 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.395374 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl8jc"] Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.496200 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc8bp\" (UniqueName: \"kubernetes.io/projected/834f84e0-4ce8-438b-90ff-99027b9ff15e-kube-api-access-kc8bp\") pod \"redhat-marketplace-vl8jc\" (UID: \"834f84e0-4ce8-438b-90ff-99027b9ff15e\") " pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.496300 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834f84e0-4ce8-438b-90ff-99027b9ff15e-catalog-content\") pod \"redhat-marketplace-vl8jc\" (UID: \"834f84e0-4ce8-438b-90ff-99027b9ff15e\") " pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.496366 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834f84e0-4ce8-438b-90ff-99027b9ff15e-utilities\") pod \"redhat-marketplace-vl8jc\" (UID: \"834f84e0-4ce8-438b-90ff-99027b9ff15e\") " pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.597357 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834f84e0-4ce8-438b-90ff-99027b9ff15e-utilities\") pod \"redhat-marketplace-vl8jc\" (UID: \"834f84e0-4ce8-438b-90ff-99027b9ff15e\") " pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.597467 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc8bp\" (UniqueName: \"kubernetes.io/projected/834f84e0-4ce8-438b-90ff-99027b9ff15e-kube-api-access-kc8bp\") pod \"redhat-marketplace-vl8jc\" (UID: \"834f84e0-4ce8-438b-90ff-99027b9ff15e\") " pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.597541 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834f84e0-4ce8-438b-90ff-99027b9ff15e-catalog-content\") pod \"redhat-marketplace-vl8jc\" (UID: \"834f84e0-4ce8-438b-90ff-99027b9ff15e\") " pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.598227 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834f84e0-4ce8-438b-90ff-99027b9ff15e-utilities\") pod \"redhat-marketplace-vl8jc\" (UID: \"834f84e0-4ce8-438b-90ff-99027b9ff15e\") " pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.598253 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834f84e0-4ce8-438b-90ff-99027b9ff15e-catalog-content\") pod \"redhat-marketplace-vl8jc\" (UID: \"834f84e0-4ce8-438b-90ff-99027b9ff15e\") " pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.619529 4953 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kc8bp\" (UniqueName: \"kubernetes.io/projected/834f84e0-4ce8-438b-90ff-99027b9ff15e-kube-api-access-kc8bp\") pod \"redhat-marketplace-vl8jc\" (UID: \"834f84e0-4ce8-438b-90ff-99027b9ff15e\") " pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 10:59:58 crc kubenswrapper[4953]: I1211 10:59:58.703153 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.029979 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h449w"] Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.032269 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h449w" Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.044960 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h449w"] Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.162015 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl8jc"] Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.181475 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbxfh\" (UniqueName: \"kubernetes.io/projected/41055966-9840-43ed-9850-c43fdde754db-kube-api-access-dbxfh\") pod \"community-operators-h449w\" (UID: \"41055966-9840-43ed-9850-c43fdde754db\") " pod="openshift-marketplace/community-operators-h449w" Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.181558 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41055966-9840-43ed-9850-c43fdde754db-utilities\") pod \"community-operators-h449w\" (UID: \"41055966-9840-43ed-9850-c43fdde754db\") " pod="openshift-marketplace/community-operators-h449w" Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.181612 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41055966-9840-43ed-9850-c43fdde754db-catalog-content\") pod \"community-operators-h449w\" (UID: \"41055966-9840-43ed-9850-c43fdde754db\") " pod="openshift-marketplace/community-operators-h449w" Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.283312 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbxfh\" (UniqueName: \"kubernetes.io/projected/41055966-9840-43ed-9850-c43fdde754db-kube-api-access-dbxfh\") pod \"community-operators-h449w\" (UID: \"41055966-9840-43ed-9850-c43fdde754db\") " pod="openshift-marketplace/community-operators-h449w" Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.283445 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41055966-9840-43ed-9850-c43fdde754db-utilities\") pod \"community-operators-h449w\" (UID: \"41055966-9840-43ed-9850-c43fdde754db\") " pod="openshift-marketplace/community-operators-h449w" Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.283479 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41055966-9840-43ed-9850-c43fdde754db-catalog-content\") pod \"community-operators-h449w\" (UID: 
\"41055966-9840-43ed-9850-c43fdde754db\") " pod="openshift-marketplace/community-operators-h449w" Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.284085 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41055966-9840-43ed-9850-c43fdde754db-utilities\") pod \"community-operators-h449w\" (UID: \"41055966-9840-43ed-9850-c43fdde754db\") " pod="openshift-marketplace/community-operators-h449w" Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.284181 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41055966-9840-43ed-9850-c43fdde754db-catalog-content\") pod \"community-operators-h449w\" (UID: \"41055966-9840-43ed-9850-c43fdde754db\") " pod="openshift-marketplace/community-operators-h449w" Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.304436 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbxfh\" (UniqueName: \"kubernetes.io/projected/41055966-9840-43ed-9850-c43fdde754db-kube-api-access-dbxfh\") pod \"community-operators-h449w\" (UID: \"41055966-9840-43ed-9850-c43fdde754db\") " pod="openshift-marketplace/community-operators-h449w" Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.367362 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h449w" Dec 11 10:59:59 crc kubenswrapper[4953]: I1211 10:59:59.935185 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h449w"] Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.015163 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h449w" event={"ID":"41055966-9840-43ed-9850-c43fdde754db","Type":"ContainerStarted","Data":"cad363b501d94689cb04a657efffdce02d347f5bf29a7795e15ca574389ff7a5"} Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.017833 4953 generic.go:334] "Generic (PLEG): container finished" podID="834f84e0-4ce8-438b-90ff-99027b9ff15e" containerID="a08338bf1829f2552c94e79e745255f066cb55e6e121a376bcb8167a8c64ab63" exitCode=0 Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.017861 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl8jc" event={"ID":"834f84e0-4ce8-438b-90ff-99027b9ff15e","Type":"ContainerDied","Data":"a08338bf1829f2552c94e79e745255f066cb55e6e121a376bcb8167a8c64ab63"} Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.017879 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl8jc" event={"ID":"834f84e0-4ce8-438b-90ff-99027b9ff15e","Type":"ContainerStarted","Data":"e091a7b3fd8cd2c5f77bcc5254f90a7327425cb558c57a2eef3ec91f4464dd25"} Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.144093 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s"] Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.145089 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.147082 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.147101 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.158459 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s"] Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.224816 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7t29\" (UniqueName: \"kubernetes.io/projected/2d996a75-67cb-4658-9a27-f04e9f57b36c-kube-api-access-f7t29\") pod \"collect-profiles-29424180-s7w5s\" (UID: \"2d996a75-67cb-4658-9a27-f04e9f57b36c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.224864 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d996a75-67cb-4658-9a27-f04e9f57b36c-secret-volume\") pod \"collect-profiles-29424180-s7w5s\" (UID: \"2d996a75-67cb-4658-9a27-f04e9f57b36c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.225237 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d996a75-67cb-4658-9a27-f04e9f57b36c-config-volume\") pod \"collect-profiles-29424180-s7w5s\" (UID: \"2d996a75-67cb-4658-9a27-f04e9f57b36c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.327975 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7t29\" (UniqueName: \"kubernetes.io/projected/2d996a75-67cb-4658-9a27-f04e9f57b36c-kube-api-access-f7t29\") pod \"collect-profiles-29424180-s7w5s\" (UID: \"2d996a75-67cb-4658-9a27-f04e9f57b36c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.328125 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d996a75-67cb-4658-9a27-f04e9f57b36c-secret-volume\") pod \"collect-profiles-29424180-s7w5s\" (UID: \"2d996a75-67cb-4658-9a27-f04e9f57b36c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.328203 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d996a75-67cb-4658-9a27-f04e9f57b36c-config-volume\") pod \"collect-profiles-29424180-s7w5s\" (UID: \"2d996a75-67cb-4658-9a27-f04e9f57b36c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.329654 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d996a75-67cb-4658-9a27-f04e9f57b36c-config-volume\") pod 
\"collect-profiles-29424180-s7w5s\" (UID: \"2d996a75-67cb-4658-9a27-f04e9f57b36c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.347768 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d996a75-67cb-4658-9a27-f04e9f57b36c-secret-volume\") pod \"collect-profiles-29424180-s7w5s\" (UID: \"2d996a75-67cb-4658-9a27-f04e9f57b36c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.348115 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7t29\" (UniqueName: \"kubernetes.io/projected/2d996a75-67cb-4658-9a27-f04e9f57b36c-kube-api-access-f7t29\") pod \"collect-profiles-29424180-s7w5s\" (UID: \"2d996a75-67cb-4658-9a27-f04e9f57b36c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:00 crc kubenswrapper[4953]: I1211 11:00:00.589503 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:01 crc kubenswrapper[4953]: W1211 11:00:01.013837 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d996a75_67cb_4658_9a27_f04e9f57b36c.slice/crio-0b7600903e3c8e05200d4048507f102a860ba1da64d8b25f0c9ea862893fed8b WatchSource:0}: Error finding container 0b7600903e3c8e05200d4048507f102a860ba1da64d8b25f0c9ea862893fed8b: Status 404 returned error can't find the container with id 0b7600903e3c8e05200d4048507f102a860ba1da64d8b25f0c9ea862893fed8b Dec 11 11:00:01 crc kubenswrapper[4953]: I1211 11:00:01.013892 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s"] Dec 11 11:00:01 crc kubenswrapper[4953]: I1211 11:00:01.047837 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" event={"ID":"2d996a75-67cb-4658-9a27-f04e9f57b36c","Type":"ContainerStarted","Data":"0b7600903e3c8e05200d4048507f102a860ba1da64d8b25f0c9ea862893fed8b"} Dec 11 11:00:01 crc kubenswrapper[4953]: I1211 11:00:01.050452 4953 generic.go:334] "Generic (PLEG): container finished" podID="41055966-9840-43ed-9850-c43fdde754db" containerID="20011853678d2994676473cfcf6c3672fe75e10c0b0384b09a293c9b4edd12a2" exitCode=0 Dec 11 11:00:01 crc kubenswrapper[4953]: I1211 11:00:01.050514 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h449w" event={"ID":"41055966-9840-43ed-9850-c43fdde754db","Type":"ContainerDied","Data":"20011853678d2994676473cfcf6c3672fe75e10c0b0384b09a293c9b4edd12a2"} Dec 11 11:00:02 crc kubenswrapper[4953]: I1211 11:00:02.059081 4953 generic.go:334] "Generic (PLEG): container finished" podID="2d996a75-67cb-4658-9a27-f04e9f57b36c" containerID="c483224e66703d078ac2591ba02317812e099a03315b68c957ed4c067d5e13df" exitCode=0 Dec 11 11:00:02 crc kubenswrapper[4953]: I1211 11:00:02.059287 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" event={"ID":"2d996a75-67cb-4658-9a27-f04e9f57b36c","Type":"ContainerDied","Data":"c483224e66703d078ac2591ba02317812e099a03315b68c957ed4c067d5e13df"} Dec 11 11:00:02 crc kubenswrapper[4953]: I1211 11:00:02.061818 4953 generic.go:334] 
"Generic (PLEG): container finished" podID="834f84e0-4ce8-438b-90ff-99027b9ff15e" containerID="227a03c7bbc5d5a0dc36ac0338fd4d3cfef3332bb95ca59324374a87048a2a11" exitCode=0 Dec 11 11:00:02 crc kubenswrapper[4953]: I1211 11:00:02.061851 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl8jc" event={"ID":"834f84e0-4ce8-438b-90ff-99027b9ff15e","Type":"ContainerDied","Data":"227a03c7bbc5d5a0dc36ac0338fd4d3cfef3332bb95ca59324374a87048a2a11"} Dec 11 11:00:03 crc kubenswrapper[4953]: I1211 11:00:03.071192 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl8jc" event={"ID":"834f84e0-4ce8-438b-90ff-99027b9ff15e","Type":"ContainerStarted","Data":"b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56"} Dec 11 11:00:03 crc kubenswrapper[4953]: I1211 11:00:03.090828 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vl8jc" podStartSLOduration=2.57214434 podStartE2EDuration="5.090804481s" podCreationTimestamp="2025-12-11 10:59:58 +0000 UTC" firstStartedPulling="2025-12-11 11:00:00.019685693 +0000 UTC m=+2918.043544726" lastFinishedPulling="2025-12-11 11:00:02.538345834 +0000 UTC m=+2920.562204867" observedRunningTime="2025-12-11 11:00:03.088775498 +0000 UTC m=+2921.112634541" watchObservedRunningTime="2025-12-11 11:00:03.090804481 +0000 UTC m=+2921.114663514" Dec 11 11:00:03 crc kubenswrapper[4953]: I1211 11:00:03.384526 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:03 crc kubenswrapper[4953]: I1211 11:00:03.487240 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d996a75-67cb-4658-9a27-f04e9f57b36c-config-volume\") pod \"2d996a75-67cb-4658-9a27-f04e9f57b36c\" (UID: \"2d996a75-67cb-4658-9a27-f04e9f57b36c\") " Dec 11 11:00:03 crc kubenswrapper[4953]: I1211 11:00:03.490254 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d996a75-67cb-4658-9a27-f04e9f57b36c-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d996a75-67cb-4658-9a27-f04e9f57b36c" (UID: "2d996a75-67cb-4658-9a27-f04e9f57b36c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 11:00:03 crc kubenswrapper[4953]: I1211 11:00:03.490496 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d996a75-67cb-4658-9a27-f04e9f57b36c-secret-volume\") pod \"2d996a75-67cb-4658-9a27-f04e9f57b36c\" (UID: \"2d996a75-67cb-4658-9a27-f04e9f57b36c\") " Dec 11 11:00:03 crc kubenswrapper[4953]: I1211 11:00:03.491520 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7t29\" (UniqueName: \"kubernetes.io/projected/2d996a75-67cb-4658-9a27-f04e9f57b36c-kube-api-access-f7t29\") pod \"2d996a75-67cb-4658-9a27-f04e9f57b36c\" (UID: \"2d996a75-67cb-4658-9a27-f04e9f57b36c\") " Dec 11 11:00:03 crc kubenswrapper[4953]: I1211 11:00:03.492236 4953 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d996a75-67cb-4658-9a27-f04e9f57b36c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 11:00:03 crc kubenswrapper[4953]: I1211 11:00:03.497028 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d996a75-67cb-4658-9a27-f04e9f57b36c-kube-api-access-f7t29" (OuterVolumeSpecName: "kube-api-access-f7t29") pod "2d996a75-67cb-4658-9a27-f04e9f57b36c" (UID: "2d996a75-67cb-4658-9a27-f04e9f57b36c"). InnerVolumeSpecName "kube-api-access-f7t29". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:00:03 crc kubenswrapper[4953]: I1211 11:00:03.499811 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d996a75-67cb-4658-9a27-f04e9f57b36c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d996a75-67cb-4658-9a27-f04e9f57b36c" (UID: "2d996a75-67cb-4658-9a27-f04e9f57b36c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 11:00:03 crc kubenswrapper[4953]: I1211 11:00:03.594054 4953 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d996a75-67cb-4658-9a27-f04e9f57b36c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 11:00:03 crc kubenswrapper[4953]: I1211 11:00:03.594101 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7t29\" (UniqueName: \"kubernetes.io/projected/2d996a75-67cb-4658-9a27-f04e9f57b36c-kube-api-access-f7t29\") on node \"crc\" DevicePath \"\"" Dec 11 11:00:04 crc kubenswrapper[4953]: I1211 11:00:04.080709 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" event={"ID":"2d996a75-67cb-4658-9a27-f04e9f57b36c","Type":"ContainerDied","Data":"0b7600903e3c8e05200d4048507f102a860ba1da64d8b25f0c9ea862893fed8b"} Dec 11 11:00:04 crc kubenswrapper[4953]: I1211 11:00:04.080769 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b7600903e3c8e05200d4048507f102a860ba1da64d8b25f0c9ea862893fed8b" Dec 11 11:00:04 crc kubenswrapper[4953]: I1211 11:00:04.082441 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-s7w5s" Dec 11 11:00:04 crc kubenswrapper[4953]: I1211 11:00:04.457303 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49"] Dec 11 11:00:04 crc kubenswrapper[4953]: I1211 11:00:04.462289 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424135-wrj49"] Dec 11 11:00:04 crc kubenswrapper[4953]: I1211 11:00:04.482389 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a45ae5ef-f18c-4275-b9f8-36afd1d25451" path="/var/lib/kubelet/pods/a45ae5ef-f18c-4275-b9f8-36afd1d25451/volumes" Dec 11 11:00:06 crc kubenswrapper[4953]: I1211 11:00:06.107312 4953 generic.go:334] "Generic (PLEG): container finished" podID="41055966-9840-43ed-9850-c43fdde754db" containerID="a9184d5f483d646d8869d31d9d2192f9512d7c9b36f740857b316af39679ef38" exitCode=0 Dec 11 11:00:06 crc kubenswrapper[4953]: I1211 11:00:06.107399 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h449w" event={"ID":"41055966-9840-43ed-9850-c43fdde754db","Type":"ContainerDied","Data":"a9184d5f483d646d8869d31d9d2192f9512d7c9b36f740857b316af39679ef38"} Dec 11 11:00:08 crc kubenswrapper[4953]: I1211 11:00:08.123062 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h449w" event={"ID":"41055966-9840-43ed-9850-c43fdde754db","Type":"ContainerStarted","Data":"b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1"} Dec 11 11:00:08 crc kubenswrapper[4953]: I1211 11:00:08.144020 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h449w" podStartSLOduration=4.163630156 podStartE2EDuration="10.14400151s" podCreationTimestamp="2025-12-11 10:59:58 +0000 UTC" firstStartedPulling="2025-12-11 11:00:01.052263087 +0000 UTC m=+2919.076122120" lastFinishedPulling="2025-12-11 11:00:07.032634441 +0000 UTC m=+2925.056493474" observedRunningTime="2025-12-11 11:00:08.142905656 +0000 UTC m=+2926.166764689" watchObservedRunningTime="2025-12-11 11:00:08.14400151 +0000 UTC m=+2926.167860543" Dec 11 11:00:08 crc kubenswrapper[4953]: I1211 11:00:08.704108 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 11:00:08 crc kubenswrapper[4953]: I1211 11:00:08.704166 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 11:00:08 crc kubenswrapper[4953]: I1211 11:00:08.749222 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 11:00:09 crc kubenswrapper[4953]: I1211 11:00:09.169204 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 11:00:09 crc kubenswrapper[4953]: I1211 11:00:09.367984 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h449w" Dec 11 11:00:09 crc kubenswrapper[4953]: I1211 11:00:09.368064 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h449w" Dec 11 11:00:09 crc kubenswrapper[4953]: I1211 11:00:09.407851 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-h449w" Dec 11 11:00:10 crc kubenswrapper[4953]: I1211 11:00:10.354209 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl8jc"] Dec 11 11:00:11 crc kubenswrapper[4953]: I1211 11:00:11.143063 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vl8jc" podUID="834f84e0-4ce8-438b-90ff-99027b9ff15e" containerName="registry-server" containerID="cri-o://b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56" gracePeriod=2 Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.047239 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.152118 4953 generic.go:334] "Generic (PLEG): container finished" podID="834f84e0-4ce8-438b-90ff-99027b9ff15e" containerID="b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56" exitCode=0 Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.152179 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl8jc" event={"ID":"834f84e0-4ce8-438b-90ff-99027b9ff15e","Type":"ContainerDied","Data":"b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56"} Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.152218 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl8jc" event={"ID":"834f84e0-4ce8-438b-90ff-99027b9ff15e","Type":"ContainerDied","Data":"e091a7b3fd8cd2c5f77bcc5254f90a7327425cb558c57a2eef3ec91f4464dd25"} Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.152242 4953 scope.go:117] "RemoveContainer" containerID="b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.152242 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vl8jc" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.171270 4953 scope.go:117] "RemoveContainer" containerID="227a03c7bbc5d5a0dc36ac0338fd4d3cfef3332bb95ca59324374a87048a2a11" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.175140 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834f84e0-4ce8-438b-90ff-99027b9ff15e-utilities\") pod \"834f84e0-4ce8-438b-90ff-99027b9ff15e\" (UID: \"834f84e0-4ce8-438b-90ff-99027b9ff15e\") " Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.175223 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc8bp\" (UniqueName: \"kubernetes.io/projected/834f84e0-4ce8-438b-90ff-99027b9ff15e-kube-api-access-kc8bp\") pod \"834f84e0-4ce8-438b-90ff-99027b9ff15e\" (UID: \"834f84e0-4ce8-438b-90ff-99027b9ff15e\") " Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.175276 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834f84e0-4ce8-438b-90ff-99027b9ff15e-catalog-content\") pod \"834f84e0-4ce8-438b-90ff-99027b9ff15e\" (UID: \"834f84e0-4ce8-438b-90ff-99027b9ff15e\") " Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.176154 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/834f84e0-4ce8-438b-90ff-99027b9ff15e-utilities" (OuterVolumeSpecName: "utilities") pod "834f84e0-4ce8-438b-90ff-99027b9ff15e" (UID: "834f84e0-4ce8-438b-90ff-99027b9ff15e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.182035 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/834f84e0-4ce8-438b-90ff-99027b9ff15e-kube-api-access-kc8bp" (OuterVolumeSpecName: "kube-api-access-kc8bp") pod "834f84e0-4ce8-438b-90ff-99027b9ff15e" (UID: "834f84e0-4ce8-438b-90ff-99027b9ff15e"). InnerVolumeSpecName "kube-api-access-kc8bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.203179 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/834f84e0-4ce8-438b-90ff-99027b9ff15e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "834f84e0-4ce8-438b-90ff-99027b9ff15e" (UID: "834f84e0-4ce8-438b-90ff-99027b9ff15e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.219361 4953 scope.go:117] "RemoveContainer" containerID="a08338bf1829f2552c94e79e745255f066cb55e6e121a376bcb8167a8c64ab63" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.255799 4953 scope.go:117] "RemoveContainer" containerID="b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56" Dec 11 11:00:12 crc kubenswrapper[4953]: E1211 11:00:12.256439 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56\": container with ID starting with b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56 not found: ID does not exist" containerID="b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.256494 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56"} err="failed to get container status \"b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56\": rpc error: code = NotFound desc = could not find container \"b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56\": container with ID starting with b37c612b6de7535d776fe68d6ee9114b61c69830358fb8576f8377f35acbab56 not found: ID does not exist" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.256520 4953 scope.go:117] "RemoveContainer" containerID="227a03c7bbc5d5a0dc36ac0338fd4d3cfef3332bb95ca59324374a87048a2a11" Dec 11 11:00:12 crc kubenswrapper[4953]: E1211 11:00:12.256866 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"227a03c7bbc5d5a0dc36ac0338fd4d3cfef3332bb95ca59324374a87048a2a11\": container with ID starting with 227a03c7bbc5d5a0dc36ac0338fd4d3cfef3332bb95ca59324374a87048a2a11 not found: ID does not exist" containerID="227a03c7bbc5d5a0dc36ac0338fd4d3cfef3332bb95ca59324374a87048a2a11" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.256927 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"227a03c7bbc5d5a0dc36ac0338fd4d3cfef3332bb95ca59324374a87048a2a11"} err="failed to get container status \"227a03c7bbc5d5a0dc36ac0338fd4d3cfef3332bb95ca59324374a87048a2a11\": rpc error: code = NotFound desc = could not find container \"227a03c7bbc5d5a0dc36ac0338fd4d3cfef3332bb95ca59324374a87048a2a11\": container with ID starting with 227a03c7bbc5d5a0dc36ac0338fd4d3cfef3332bb95ca59324374a87048a2a11 not found: ID does not exist" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.256964 4953 scope.go:117] "RemoveContainer" containerID="a08338bf1829f2552c94e79e745255f066cb55e6e121a376bcb8167a8c64ab63" Dec 11 11:00:12 crc kubenswrapper[4953]: E1211 11:00:12.257422 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08338bf1829f2552c94e79e745255f066cb55e6e121a376bcb8167a8c64ab63\": container with ID starting with a08338bf1829f2552c94e79e745255f066cb55e6e121a376bcb8167a8c64ab63 not found: ID does not exist" containerID="a08338bf1829f2552c94e79e745255f066cb55e6e121a376bcb8167a8c64ab63" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.257466 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a08338bf1829f2552c94e79e745255f066cb55e6e121a376bcb8167a8c64ab63"} err="failed to get container status \"a08338bf1829f2552c94e79e745255f066cb55e6e121a376bcb8167a8c64ab63\": rpc error: code = NotFound desc = could not find container \"a08338bf1829f2552c94e79e745255f066cb55e6e121a376bcb8167a8c64ab63\": container with ID starting with a08338bf1829f2552c94e79e745255f066cb55e6e121a376bcb8167a8c64ab63 not found: ID does not exist" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.276693 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834f84e0-4ce8-438b-90ff-99027b9ff15e-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.276745 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc8bp\" (UniqueName: \"kubernetes.io/projected/834f84e0-4ce8-438b-90ff-99027b9ff15e-kube-api-access-kc8bp\") on node \"crc\" DevicePath \"\"" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.276756 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834f84e0-4ce8-438b-90ff-99027b9ff15e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.491175 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl8jc"] Dec 11 11:00:12 crc kubenswrapper[4953]: I1211 11:00:12.491545 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl8jc"] Dec 11 11:00:14 crc kubenswrapper[4953]: I1211 11:00:14.482663 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="834f84e0-4ce8-438b-90ff-99027b9ff15e" path="/var/lib/kubelet/pods/834f84e0-4ce8-438b-90ff-99027b9ff15e/volumes" Dec 11 11:00:18 crc kubenswrapper[4953]: I1211 11:00:18.193689 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:00:18 crc kubenswrapper[4953]: I1211 11:00:18.194079 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:00:19 crc kubenswrapper[4953]: I1211 11:00:19.412487 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h449w" Dec 11 11:00:19 crc kubenswrapper[4953]: I1211 11:00:19.472547 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h449w"] Dec 11 11:00:20 crc kubenswrapper[4953]: I1211 11:00:20.222720 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h449w" podUID="41055966-9840-43ed-9850-c43fdde754db" containerName="registry-server" containerID="cri-o://b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1" gracePeriod=2 Dec 11 11:00:20 crc kubenswrapper[4953]: I1211 11:00:20.612209 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h449w" Dec 11 11:00:20 crc kubenswrapper[4953]: I1211 11:00:20.707054 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41055966-9840-43ed-9850-c43fdde754db-catalog-content\") pod \"41055966-9840-43ed-9850-c43fdde754db\" (UID: \"41055966-9840-43ed-9850-c43fdde754db\") " Dec 11 11:00:20 crc kubenswrapper[4953]: I1211 11:00:20.707114 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41055966-9840-43ed-9850-c43fdde754db-utilities\") pod \"41055966-9840-43ed-9850-c43fdde754db\" (UID: \"41055966-9840-43ed-9850-c43fdde754db\") " Dec 11 11:00:20 crc kubenswrapper[4953]: I1211 11:00:20.707261 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbxfh\" (UniqueName: \"kubernetes.io/projected/41055966-9840-43ed-9850-c43fdde754db-kube-api-access-dbxfh\") pod \"41055966-9840-43ed-9850-c43fdde754db\" (UID: \"41055966-9840-43ed-9850-c43fdde754db\") " Dec 11 11:00:20 crc kubenswrapper[4953]: I1211 11:00:20.708088 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41055966-9840-43ed-9850-c43fdde754db-utilities" (OuterVolumeSpecName: "utilities") pod "41055966-9840-43ed-9850-c43fdde754db" (UID: "41055966-9840-43ed-9850-c43fdde754db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:00:20 crc kubenswrapper[4953]: I1211 11:00:20.714674 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41055966-9840-43ed-9850-c43fdde754db-kube-api-access-dbxfh" (OuterVolumeSpecName: "kube-api-access-dbxfh") pod "41055966-9840-43ed-9850-c43fdde754db" (UID: "41055966-9840-43ed-9850-c43fdde754db"). InnerVolumeSpecName "kube-api-access-dbxfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:00:20 crc kubenswrapper[4953]: I1211 11:00:20.767286 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41055966-9840-43ed-9850-c43fdde754db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41055966-9840-43ed-9850-c43fdde754db" (UID: "41055966-9840-43ed-9850-c43fdde754db"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:00:20 crc kubenswrapper[4953]: I1211 11:00:20.809136 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbxfh\" (UniqueName: \"kubernetes.io/projected/41055966-9840-43ed-9850-c43fdde754db-kube-api-access-dbxfh\") on node \"crc\" DevicePath \"\"" Dec 11 11:00:20 crc kubenswrapper[4953]: I1211 11:00:20.809174 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41055966-9840-43ed-9850-c43fdde754db-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:00:20 crc kubenswrapper[4953]: I1211 11:00:20.809189 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41055966-9840-43ed-9850-c43fdde754db-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.230848 4953 generic.go:334] "Generic (PLEG): container finished" podID="41055966-9840-43ed-9850-c43fdde754db" containerID="b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1" exitCode=0 Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.230900 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h449w" event={"ID":"41055966-9840-43ed-9850-c43fdde754db","Type":"ContainerDied","Data":"b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1"} Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.230928 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h449w" event={"ID":"41055966-9840-43ed-9850-c43fdde754db","Type":"ContainerDied","Data":"cad363b501d94689cb04a657efffdce02d347f5bf29a7795e15ca574389ff7a5"} Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.230944 4953 scope.go:117] "RemoveContainer" containerID="b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1" Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.230955 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h449w" Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.249673 4953 scope.go:117] "RemoveContainer" containerID="a9184d5f483d646d8869d31d9d2192f9512d7c9b36f740857b316af39679ef38" Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.265337 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h449w"] Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.270894 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h449w"] Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.293875 4953 scope.go:117] "RemoveContainer" containerID="20011853678d2994676473cfcf6c3672fe75e10c0b0384b09a293c9b4edd12a2" Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.309310 4953 scope.go:117] "RemoveContainer" containerID="b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1" Dec 11 11:00:21 crc kubenswrapper[4953]: E1211 11:00:21.309928 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1\": container with ID starting with b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1 not found: ID does not exist" containerID="b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1" Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.309974 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1"} err="failed to get container status \"b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1\": rpc error: code = NotFound desc = could not find container \"b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1\": container with ID starting with b9b4beb2d33cf92e64730f65b832335aab054f36c35a44b6cfe427c3b0093db1 not found: ID does not exist" Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.310004 4953 scope.go:117] "RemoveContainer" containerID="a9184d5f483d646d8869d31d9d2192f9512d7c9b36f740857b316af39679ef38" Dec 11 11:00:21 crc kubenswrapper[4953]: E1211 11:00:21.310538 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9184d5f483d646d8869d31d9d2192f9512d7c9b36f740857b316af39679ef38\": container with ID starting with a9184d5f483d646d8869d31d9d2192f9512d7c9b36f740857b316af39679ef38 not found: ID does not exist" containerID="a9184d5f483d646d8869d31d9d2192f9512d7c9b36f740857b316af39679ef38" Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.310573 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9184d5f483d646d8869d31d9d2192f9512d7c9b36f740857b316af39679ef38"} err="failed to get container status \"a9184d5f483d646d8869d31d9d2192f9512d7c9b36f740857b316af39679ef38\": rpc error: code = NotFound desc = could not find container \"a9184d5f483d646d8869d31d9d2192f9512d7c9b36f740857b316af39679ef38\": container with ID starting with a9184d5f483d646d8869d31d9d2192f9512d7c9b36f740857b316af39679ef38 not found: ID does not exist" Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.310612 4953 scope.go:117] "RemoveContainer" containerID="20011853678d2994676473cfcf6c3672fe75e10c0b0384b09a293c9b4edd12a2" Dec 11 11:00:21 crc kubenswrapper[4953]: E1211 11:00:21.310994 4953 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"20011853678d2994676473cfcf6c3672fe75e10c0b0384b09a293c9b4edd12a2\": container with ID starting with 20011853678d2994676473cfcf6c3672fe75e10c0b0384b09a293c9b4edd12a2 not found: ID does not exist" containerID="20011853678d2994676473cfcf6c3672fe75e10c0b0384b09a293c9b4edd12a2" Dec 11 11:00:21 crc kubenswrapper[4953]: I1211 11:00:21.311048 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20011853678d2994676473cfcf6c3672fe75e10c0b0384b09a293c9b4edd12a2"} err="failed to get container status \"20011853678d2994676473cfcf6c3672fe75e10c0b0384b09a293c9b4edd12a2\": rpc error: code = NotFound desc = could not find container \"20011853678d2994676473cfcf6c3672fe75e10c0b0384b09a293c9b4edd12a2\": container with ID starting with 20011853678d2994676473cfcf6c3672fe75e10c0b0384b09a293c9b4edd12a2 not found: ID does not exist" Dec 11 11:00:22 crc kubenswrapper[4953]: I1211 11:00:22.483945 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41055966-9840-43ed-9850-c43fdde754db" path="/var/lib/kubelet/pods/41055966-9840-43ed-9850-c43fdde754db/volumes" Dec 11 11:00:34 crc kubenswrapper[4953]: I1211 11:00:34.146598 4953 scope.go:117] "RemoveContainer" containerID="fc46e6df9641ac46850d4248dc07f77bdb522ee7c24c06377eb246d1986826f7" Dec 11 11:00:48 crc kubenswrapper[4953]: I1211 11:00:48.193983 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:00:48 crc kubenswrapper[4953]: I1211 11:00:48.194677 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:01:18 crc kubenswrapper[4953]: I1211 11:01:18.193746 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:01:18 crc kubenswrapper[4953]: I1211 11:01:18.194330 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:01:18 crc kubenswrapper[4953]: I1211 11:01:18.194376 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 11:01:18 crc kubenswrapper[4953]: I1211 11:01:18.195027 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 11:01:18 crc kubenswrapper[4953]: I1211 
11:01:18.195090 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" gracePeriod=600 Dec 11 11:01:18 crc kubenswrapper[4953]: E1211 11:01:18.346908 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:01:18 crc kubenswrapper[4953]: I1211 11:01:18.720542 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" exitCode=0 Dec 11 11:01:18 crc kubenswrapper[4953]: I1211 11:01:18.720609 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef"} Dec 11 11:01:18 crc kubenswrapper[4953]: I1211 11:01:18.720699 4953 scope.go:117] "RemoveContainer" containerID="f51d1aefa9fd63384083ddae5740d863c31e6a3f5174b662a513773698844a30" Dec 11 11:01:18 crc kubenswrapper[4953]: I1211 11:01:18.721251 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:01:18 crc kubenswrapper[4953]: E1211 11:01:18.721481 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:01:33 crc kubenswrapper[4953]: I1211 11:01:33.473469 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:01:33 crc kubenswrapper[4953]: E1211 11:01:33.474397 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:01:46 crc kubenswrapper[4953]: I1211 11:01:46.475386 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:01:46 crc kubenswrapper[4953]: E1211 11:01:46.477998 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:02:01 crc kubenswrapper[4953]: I1211 11:02:01.473753 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:02:01 crc kubenswrapper[4953]: E1211 11:02:01.475756 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:02:14 crc kubenswrapper[4953]: I1211 11:02:14.474151 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:02:14 crc kubenswrapper[4953]: E1211 11:02:14.475043 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:02:26 crc kubenswrapper[4953]: I1211 11:02:26.473827 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:02:26 crc kubenswrapper[4953]: E1211 11:02:26.474749 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:02:40 crc kubenswrapper[4953]: I1211 11:02:40.474037 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:02:40 crc kubenswrapper[4953]: E1211 11:02:40.475026 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:02:55 crc kubenswrapper[4953]: I1211 11:02:55.473917 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:02:55 crc kubenswrapper[4953]: E1211 11:02:55.474786 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:03:09 crc kubenswrapper[4953]: I1211 11:03:09.473255 4953 
scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:03:09 crc kubenswrapper[4953]: E1211 11:03:09.474709 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:03:24 crc kubenswrapper[4953]: I1211 11:03:24.473542 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:03:24 crc kubenswrapper[4953]: E1211 11:03:24.474647 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:03:36 crc kubenswrapper[4953]: I1211 11:03:36.474137 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:03:36 crc kubenswrapper[4953]: E1211 11:03:36.475042 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:03:48 crc kubenswrapper[4953]: I1211 11:03:48.474092 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:03:48 crc kubenswrapper[4953]: E1211 11:03:48.474862 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:04:01 crc kubenswrapper[4953]: I1211 11:04:01.474150 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:04:01 crc kubenswrapper[4953]: E1211 11:04:01.476088 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:04:13 crc kubenswrapper[4953]: I1211 11:04:13.473620 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:04:13 crc kubenswrapper[4953]: E1211 11:04:13.474530 4953 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:04:26 crc kubenswrapper[4953]: I1211 11:04:26.474099 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:04:26 crc kubenswrapper[4953]: E1211 11:04:26.474944 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:04:38 crc kubenswrapper[4953]: I1211 11:04:38.473710 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:04:38 crc kubenswrapper[4953]: E1211 11:04:38.474284 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:04:53 crc kubenswrapper[4953]: I1211 11:04:53.472757 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:04:53 crc kubenswrapper[4953]: E1211 11:04:53.473618 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:05:07 crc kubenswrapper[4953]: I1211 11:05:07.473349 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:05:07 crc kubenswrapper[4953]: E1211 11:05:07.474191 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:05:18 crc kubenswrapper[4953]: I1211 11:05:18.473473 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:05:18 crc kubenswrapper[4953]: E1211 11:05:18.474267 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:05:31 crc kubenswrapper[4953]: I1211 11:05:31.473438 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:05:31 crc kubenswrapper[4953]: E1211 11:05:31.475741 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:05:44 crc kubenswrapper[4953]: I1211 11:05:44.473936 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:05:44 crc kubenswrapper[4953]: E1211 11:05:44.474960 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:05:56 crc kubenswrapper[4953]: I1211 11:05:56.473188 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:05:56 crc kubenswrapper[4953]: E1211 11:05:56.474043 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:06:07 crc kubenswrapper[4953]: I1211 11:06:07.473718 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:06:07 crc kubenswrapper[4953]: E1211 11:06:07.474512 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:06:18 crc kubenswrapper[4953]: I1211 11:06:18.474819 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:06:19 crc kubenswrapper[4953]: I1211 11:06:19.390734 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"3596d60146c5bcb908e0f575e2de3ca703cbc87c2aae02fd995c4fd54c05bec0"} Dec 11 11:07:54 crc kubenswrapper[4953]: I1211 11:07:54.978297 4953 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/certified-operators-qpzsr"] Dec 11 11:07:54 crc kubenswrapper[4953]: E1211 11:07:54.979210 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834f84e0-4ce8-438b-90ff-99027b9ff15e" containerName="registry-server" Dec 11 11:07:54 crc kubenswrapper[4953]: I1211 11:07:54.979236 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="834f84e0-4ce8-438b-90ff-99027b9ff15e" containerName="registry-server" Dec 11 11:07:54 crc kubenswrapper[4953]: E1211 11:07:54.979255 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834f84e0-4ce8-438b-90ff-99027b9ff15e" containerName="extract-content" Dec 11 11:07:54 crc kubenswrapper[4953]: I1211 11:07:54.979266 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="834f84e0-4ce8-438b-90ff-99027b9ff15e" containerName="extract-content" Dec 11 11:07:54 crc kubenswrapper[4953]: E1211 11:07:54.979280 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41055966-9840-43ed-9850-c43fdde754db" containerName="extract-utilities" Dec 11 11:07:54 crc kubenswrapper[4953]: I1211 11:07:54.979291 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="41055966-9840-43ed-9850-c43fdde754db" containerName="extract-utilities" Dec 11 11:07:54 crc kubenswrapper[4953]: E1211 11:07:54.979317 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834f84e0-4ce8-438b-90ff-99027b9ff15e" containerName="extract-utilities" Dec 11 11:07:54 crc kubenswrapper[4953]: I1211 11:07:54.979327 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="834f84e0-4ce8-438b-90ff-99027b9ff15e" containerName="extract-utilities" Dec 11 11:07:54 crc kubenswrapper[4953]: E1211 11:07:54.979351 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41055966-9840-43ed-9850-c43fdde754db" containerName="registry-server" Dec 11 11:07:54 crc kubenswrapper[4953]: I1211 11:07:54.979360 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="41055966-9840-43ed-9850-c43fdde754db" containerName="registry-server" Dec 11 11:07:54 crc kubenswrapper[4953]: E1211 11:07:54.979380 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41055966-9840-43ed-9850-c43fdde754db" containerName="extract-content" Dec 11 11:07:54 crc kubenswrapper[4953]: I1211 11:07:54.979389 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="41055966-9840-43ed-9850-c43fdde754db" containerName="extract-content" Dec 11 11:07:54 crc kubenswrapper[4953]: E1211 11:07:54.979404 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d996a75-67cb-4658-9a27-f04e9f57b36c" containerName="collect-profiles" Dec 11 11:07:54 crc kubenswrapper[4953]: I1211 11:07:54.979414 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d996a75-67cb-4658-9a27-f04e9f57b36c" containerName="collect-profiles" Dec 11 11:07:54 crc kubenswrapper[4953]: I1211 11:07:54.979690 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="41055966-9840-43ed-9850-c43fdde754db" containerName="registry-server" Dec 11 11:07:54 crc kubenswrapper[4953]: I1211 11:07:54.979720 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="834f84e0-4ce8-438b-90ff-99027b9ff15e" containerName="registry-server" Dec 11 11:07:54 crc kubenswrapper[4953]: I1211 11:07:54.979744 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d996a75-67cb-4658-9a27-f04e9f57b36c" containerName="collect-profiles" Dec 11 11:07:54 crc kubenswrapper[4953]: I1211 11:07:54.981245 4953 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:07:55 crc kubenswrapper[4953]: I1211 11:07:55.010763 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qpzsr"] Dec 11 11:07:55 crc kubenswrapper[4953]: I1211 11:07:55.087759 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-catalog-content\") pod \"certified-operators-qpzsr\" (UID: \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\") " pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:07:55 crc kubenswrapper[4953]: I1211 11:07:55.087910 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46554\" (UniqueName: \"kubernetes.io/projected/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-kube-api-access-46554\") pod \"certified-operators-qpzsr\" (UID: \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\") " pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:07:55 crc kubenswrapper[4953]: I1211 11:07:55.087962 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-utilities\") pod \"certified-operators-qpzsr\" (UID: \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\") " pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:07:55 crc kubenswrapper[4953]: I1211 11:07:55.188978 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46554\" (UniqueName: \"kubernetes.io/projected/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-kube-api-access-46554\") pod \"certified-operators-qpzsr\" (UID: \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\") " pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:07:55 crc kubenswrapper[4953]: I1211 11:07:55.189047 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-utilities\") pod \"certified-operators-qpzsr\" (UID: \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\") " pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:07:55 crc kubenswrapper[4953]: I1211 11:07:55.189106 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-catalog-content\") pod \"certified-operators-qpzsr\" (UID: \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\") " pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:07:55 crc kubenswrapper[4953]: I1211 11:07:55.189755 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-catalog-content\") pod \"certified-operators-qpzsr\" (UID: \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\") " pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:07:55 crc kubenswrapper[4953]: I1211 11:07:55.190386 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-utilities\") pod \"certified-operators-qpzsr\" (UID: \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\") " pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:07:55 crc kubenswrapper[4953]: I1211 11:07:55.212325 
4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46554\" (UniqueName: \"kubernetes.io/projected/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-kube-api-access-46554\") pod \"certified-operators-qpzsr\" (UID: \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\") " pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:07:55 crc kubenswrapper[4953]: I1211 11:07:55.356420 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:07:56 crc kubenswrapper[4953]: I1211 11:07:56.084243 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qpzsr"] Dec 11 11:07:56 crc kubenswrapper[4953]: I1211 11:07:56.136140 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpzsr" event={"ID":"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b","Type":"ContainerStarted","Data":"b4f4654f06c55415ef78aea3bbe860dca832be3bcbea916b95893e822ed19edd"} Dec 11 11:07:57 crc kubenswrapper[4953]: I1211 11:07:57.146432 4953 generic.go:334] "Generic (PLEG): container finished" podID="5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" containerID="381d0f57172eeefde4d093874ab702cde0bdd332d7bea774d681c0000b2bd628" exitCode=0 Dec 11 11:07:57 crc kubenswrapper[4953]: I1211 11:07:57.146511 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpzsr" event={"ID":"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b","Type":"ContainerDied","Data":"381d0f57172eeefde4d093874ab702cde0bdd332d7bea774d681c0000b2bd628"} Dec 11 11:07:57 crc kubenswrapper[4953]: I1211 11:07:57.150352 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 11:08:02 crc kubenswrapper[4953]: I1211 11:08:02.197817 4953 generic.go:334] "Generic (PLEG): container finished" podID="5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" containerID="00efb21a6b85f07cab010e9b7a4e633955e2cfe7a9ce5b4a2b7881aa6ad9115f" exitCode=0 Dec 11 11:08:02 crc kubenswrapper[4953]: I1211 11:08:02.197908 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpzsr" event={"ID":"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b","Type":"ContainerDied","Data":"00efb21a6b85f07cab010e9b7a4e633955e2cfe7a9ce5b4a2b7881aa6ad9115f"} Dec 11 11:08:03 crc kubenswrapper[4953]: I1211 11:08:03.208506 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpzsr" event={"ID":"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b","Type":"ContainerStarted","Data":"fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f"} Dec 11 11:08:03 crc kubenswrapper[4953]: I1211 11:08:03.244753 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qpzsr" podStartSLOduration=3.7183980180000002 podStartE2EDuration="9.244733317s" podCreationTimestamp="2025-12-11 11:07:54 +0000 UTC" firstStartedPulling="2025-12-11 11:07:57.148487526 +0000 UTC m=+3395.172346559" lastFinishedPulling="2025-12-11 11:08:02.674822805 +0000 UTC m=+3400.698681858" observedRunningTime="2025-12-11 11:08:03.236671714 +0000 UTC m=+3401.260530757" watchObservedRunningTime="2025-12-11 11:08:03.244733317 +0000 UTC m=+3401.268592350" Dec 11 11:08:05 crc kubenswrapper[4953]: I1211 11:08:05.357083 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:08:05 crc kubenswrapper[4953]: I1211 
11:08:05.357426 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:08:05 crc kubenswrapper[4953]: I1211 11:08:05.417929 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:08:15 crc kubenswrapper[4953]: I1211 11:08:15.415422 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qpzsr" Dec 11 11:08:15 crc kubenswrapper[4953]: I1211 11:08:15.504616 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qpzsr"] Dec 11 11:08:15 crc kubenswrapper[4953]: I1211 11:08:15.541730 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w2cg9"] Dec 11 11:08:15 crc kubenswrapper[4953]: I1211 11:08:15.541985 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w2cg9" podUID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerName="registry-server" containerID="cri-o://6a41c2126cd02748818b1009d5997376bace4e2f97bc5d9788dde104b76e2803" gracePeriod=2 Dec 11 11:08:16 crc kubenswrapper[4953]: I1211 11:08:16.310276 4953 generic.go:334] "Generic (PLEG): container finished" podID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerID="6a41c2126cd02748818b1009d5997376bace4e2f97bc5d9788dde104b76e2803" exitCode=0 Dec 11 11:08:16 crc kubenswrapper[4953]: I1211 11:08:16.310471 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2cg9" event={"ID":"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d","Type":"ContainerDied","Data":"6a41c2126cd02748818b1009d5997376bace4e2f97bc5d9788dde104b76e2803"} Dec 11 11:08:16 crc kubenswrapper[4953]: I1211 11:08:16.507493 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 11:08:16 crc kubenswrapper[4953]: I1211 11:08:16.613451 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8c4p\" (UniqueName: \"kubernetes.io/projected/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-kube-api-access-h8c4p\") pod \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\" (UID: \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\") " Dec 11 11:08:16 crc kubenswrapper[4953]: I1211 11:08:16.613964 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-catalog-content\") pod \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\" (UID: \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\") " Dec 11 11:08:16 crc kubenswrapper[4953]: I1211 11:08:16.614119 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-utilities\") pod \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\" (UID: \"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d\") " Dec 11 11:08:16 crc kubenswrapper[4953]: I1211 11:08:16.614555 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-utilities" (OuterVolumeSpecName: "utilities") pod "7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" (UID: "7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:08:16 crc kubenswrapper[4953]: I1211 11:08:16.634976 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-kube-api-access-h8c4p" (OuterVolumeSpecName: "kube-api-access-h8c4p") pod "7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" (UID: "7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d"). InnerVolumeSpecName "kube-api-access-h8c4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:08:16 crc kubenswrapper[4953]: I1211 11:08:16.679914 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" (UID: "7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:08:16 crc kubenswrapper[4953]: I1211 11:08:16.716436 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:08:16 crc kubenswrapper[4953]: I1211 11:08:16.716474 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:08:16 crc kubenswrapper[4953]: I1211 11:08:16.716486 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8c4p\" (UniqueName: \"kubernetes.io/projected/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d-kube-api-access-h8c4p\") on node \"crc\" DevicePath \"\"" Dec 11 11:08:17 crc kubenswrapper[4953]: I1211 11:08:17.318378 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2cg9" event={"ID":"7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d","Type":"ContainerDied","Data":"ee1dd12d62c82d4e7956ab483db977471a6af555b296f1f9d4ef776f7f87f89d"} Dec 11 11:08:17 crc kubenswrapper[4953]: I1211 11:08:17.318765 4953 scope.go:117] "RemoveContainer" containerID="6a41c2126cd02748818b1009d5997376bace4e2f97bc5d9788dde104b76e2803" Dec 11 11:08:17 crc kubenswrapper[4953]: I1211 11:08:17.318426 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w2cg9" Dec 11 11:08:17 crc kubenswrapper[4953]: I1211 11:08:17.338702 4953 scope.go:117] "RemoveContainer" containerID="87dc5bc65de8a1312cbf7c9c65698827eed6e5c75dd30656027fe156d18c9e90" Dec 11 11:08:17 crc kubenswrapper[4953]: I1211 11:08:17.355443 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w2cg9"] Dec 11 11:08:17 crc kubenswrapper[4953]: I1211 11:08:17.361970 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w2cg9"] Dec 11 11:08:17 crc kubenswrapper[4953]: I1211 11:08:17.378037 4953 scope.go:117] "RemoveContainer" containerID="abbd36f7f17c206c9d80a475740e8d31d51ef05e06140ee7bbc6cefe0454821d" Dec 11 11:08:18 crc kubenswrapper[4953]: I1211 11:08:18.193966 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:08:18 crc kubenswrapper[4953]: I1211 11:08:18.194042 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:08:18 crc kubenswrapper[4953]: I1211 11:08:18.488847 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" path="/var/lib/kubelet/pods/7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d/volumes" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.188495 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b6tfb"] Dec 11 11:08:24 crc kubenswrapper[4953]: E1211 11:08:24.189423 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerName="extract-content" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.189480 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerName="extract-content" Dec 11 11:08:24 crc kubenswrapper[4953]: E1211 11:08:24.189547 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerName="extract-utilities" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.189557 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerName="extract-utilities" Dec 11 11:08:24 crc kubenswrapper[4953]: E1211 11:08:24.189592 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerName="registry-server" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.189599 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerName="registry-server" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.189745 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4c8cd1-2ee1-4d74-aedf-f5da8fa20c9d" containerName="registry-server" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.191312 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.199520 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b6tfb"] Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.328592 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xs87\" (UniqueName: \"kubernetes.io/projected/e9213234-122f-4458-8cc7-296dcf88a5de-kube-api-access-2xs87\") pod \"redhat-operators-b6tfb\" (UID: \"e9213234-122f-4458-8cc7-296dcf88a5de\") " pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.328658 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9213234-122f-4458-8cc7-296dcf88a5de-catalog-content\") pod \"redhat-operators-b6tfb\" (UID: \"e9213234-122f-4458-8cc7-296dcf88a5de\") " pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.328734 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9213234-122f-4458-8cc7-296dcf88a5de-utilities\") pod \"redhat-operators-b6tfb\" (UID: \"e9213234-122f-4458-8cc7-296dcf88a5de\") " pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.429813 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xs87\" (UniqueName: \"kubernetes.io/projected/e9213234-122f-4458-8cc7-296dcf88a5de-kube-api-access-2xs87\") pod \"redhat-operators-b6tfb\" (UID: \"e9213234-122f-4458-8cc7-296dcf88a5de\") " pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.429896 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9213234-122f-4458-8cc7-296dcf88a5de-catalog-content\") pod \"redhat-operators-b6tfb\" (UID: \"e9213234-122f-4458-8cc7-296dcf88a5de\") " pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.429922 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9213234-122f-4458-8cc7-296dcf88a5de-utilities\") pod \"redhat-operators-b6tfb\" (UID: \"e9213234-122f-4458-8cc7-296dcf88a5de\") " pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.430554 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9213234-122f-4458-8cc7-296dcf88a5de-utilities\") pod \"redhat-operators-b6tfb\" (UID: \"e9213234-122f-4458-8cc7-296dcf88a5de\") " pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.430586 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9213234-122f-4458-8cc7-296dcf88a5de-catalog-content\") pod \"redhat-operators-b6tfb\" (UID: \"e9213234-122f-4458-8cc7-296dcf88a5de\") " pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.449114 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2xs87\" (UniqueName: \"kubernetes.io/projected/e9213234-122f-4458-8cc7-296dcf88a5de-kube-api-access-2xs87\") pod \"redhat-operators-b6tfb\" (UID: \"e9213234-122f-4458-8cc7-296dcf88a5de\") " pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.531077 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:24 crc kubenswrapper[4953]: I1211 11:08:24.998302 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b6tfb"] Dec 11 11:08:25 crc kubenswrapper[4953]: I1211 11:08:25.451829 4953 generic.go:334] "Generic (PLEG): container finished" podID="e9213234-122f-4458-8cc7-296dcf88a5de" containerID="eed6136b75d24f1c07d298434edf4b27ff066400814832bad6307f3952867864" exitCode=0 Dec 11 11:08:25 crc kubenswrapper[4953]: I1211 11:08:25.452019 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6tfb" event={"ID":"e9213234-122f-4458-8cc7-296dcf88a5de","Type":"ContainerDied","Data":"eed6136b75d24f1c07d298434edf4b27ff066400814832bad6307f3952867864"} Dec 11 11:08:25 crc kubenswrapper[4953]: I1211 11:08:25.452223 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6tfb" event={"ID":"e9213234-122f-4458-8cc7-296dcf88a5de","Type":"ContainerStarted","Data":"0d55277314170a239d7ce81b9dd66821f7082df6cd96c418c347c63aa8133b13"} Dec 11 11:08:26 crc kubenswrapper[4953]: I1211 11:08:26.460251 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6tfb" event={"ID":"e9213234-122f-4458-8cc7-296dcf88a5de","Type":"ContainerStarted","Data":"e06666bbca532e9194d64caed532d531e6c95c000858fbc3a30422d210618da0"} Dec 11 11:08:27 crc kubenswrapper[4953]: I1211 11:08:27.470749 4953 generic.go:334] "Generic (PLEG): container finished" podID="e9213234-122f-4458-8cc7-296dcf88a5de" containerID="e06666bbca532e9194d64caed532d531e6c95c000858fbc3a30422d210618da0" exitCode=0 Dec 11 11:08:27 crc kubenswrapper[4953]: I1211 11:08:27.471000 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6tfb" event={"ID":"e9213234-122f-4458-8cc7-296dcf88a5de","Type":"ContainerDied","Data":"e06666bbca532e9194d64caed532d531e6c95c000858fbc3a30422d210618da0"} Dec 11 11:08:28 crc kubenswrapper[4953]: I1211 11:08:28.480475 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6tfb" event={"ID":"e9213234-122f-4458-8cc7-296dcf88a5de","Type":"ContainerStarted","Data":"f13db964d536c3e99406fa47d5372e9160677249724614579d80613ab4df1274"} Dec 11 11:08:28 crc kubenswrapper[4953]: I1211 11:08:28.499934 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b6tfb" podStartSLOduration=1.849984408 podStartE2EDuration="4.499912774s" podCreationTimestamp="2025-12-11 11:08:24 +0000 UTC" firstStartedPulling="2025-12-11 11:08:25.453971096 +0000 UTC m=+3423.477830129" lastFinishedPulling="2025-12-11 11:08:28.103899462 +0000 UTC m=+3426.127758495" observedRunningTime="2025-12-11 11:08:28.495485315 +0000 UTC m=+3426.519344348" watchObservedRunningTime="2025-12-11 11:08:28.499912774 +0000 UTC m=+3426.523771827" Dec 11 11:08:34 crc kubenswrapper[4953]: I1211 11:08:34.531508 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 
11:08:34 crc kubenswrapper[4953]: I1211 11:08:34.532124 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:34 crc kubenswrapper[4953]: I1211 11:08:34.577348 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:35 crc kubenswrapper[4953]: I1211 11:08:35.574536 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:35 crc kubenswrapper[4953]: I1211 11:08:35.616716 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b6tfb"] Dec 11 11:08:37 crc kubenswrapper[4953]: I1211 11:08:37.549255 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b6tfb" podUID="e9213234-122f-4458-8cc7-296dcf88a5de" containerName="registry-server" containerID="cri-o://f13db964d536c3e99406fa47d5372e9160677249724614579d80613ab4df1274" gracePeriod=2 Dec 11 11:08:39 crc kubenswrapper[4953]: I1211 11:08:39.580490 4953 generic.go:334] "Generic (PLEG): container finished" podID="e9213234-122f-4458-8cc7-296dcf88a5de" containerID="f13db964d536c3e99406fa47d5372e9160677249724614579d80613ab4df1274" exitCode=0 Dec 11 11:08:39 crc kubenswrapper[4953]: I1211 11:08:39.580570 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6tfb" event={"ID":"e9213234-122f-4458-8cc7-296dcf88a5de","Type":"ContainerDied","Data":"f13db964d536c3e99406fa47d5372e9160677249724614579d80613ab4df1274"} Dec 11 11:08:39 crc kubenswrapper[4953]: I1211 11:08:39.808690 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:39 crc kubenswrapper[4953]: I1211 11:08:39.937033 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9213234-122f-4458-8cc7-296dcf88a5de-utilities\") pod \"e9213234-122f-4458-8cc7-296dcf88a5de\" (UID: \"e9213234-122f-4458-8cc7-296dcf88a5de\") " Dec 11 11:08:39 crc kubenswrapper[4953]: I1211 11:08:39.937112 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xs87\" (UniqueName: \"kubernetes.io/projected/e9213234-122f-4458-8cc7-296dcf88a5de-kube-api-access-2xs87\") pod \"e9213234-122f-4458-8cc7-296dcf88a5de\" (UID: \"e9213234-122f-4458-8cc7-296dcf88a5de\") " Dec 11 11:08:39 crc kubenswrapper[4953]: I1211 11:08:39.937149 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9213234-122f-4458-8cc7-296dcf88a5de-catalog-content\") pod \"e9213234-122f-4458-8cc7-296dcf88a5de\" (UID: \"e9213234-122f-4458-8cc7-296dcf88a5de\") " Dec 11 11:08:39 crc kubenswrapper[4953]: I1211 11:08:39.938121 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9213234-122f-4458-8cc7-296dcf88a5de-utilities" (OuterVolumeSpecName: "utilities") pod "e9213234-122f-4458-8cc7-296dcf88a5de" (UID: "e9213234-122f-4458-8cc7-296dcf88a5de"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:08:39 crc kubenswrapper[4953]: I1211 11:08:39.938366 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9213234-122f-4458-8cc7-296dcf88a5de-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:08:39 crc kubenswrapper[4953]: I1211 11:08:39.943128 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9213234-122f-4458-8cc7-296dcf88a5de-kube-api-access-2xs87" (OuterVolumeSpecName: "kube-api-access-2xs87") pod "e9213234-122f-4458-8cc7-296dcf88a5de" (UID: "e9213234-122f-4458-8cc7-296dcf88a5de"). InnerVolumeSpecName "kube-api-access-2xs87". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:08:40 crc kubenswrapper[4953]: I1211 11:08:40.038982 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xs87\" (UniqueName: \"kubernetes.io/projected/e9213234-122f-4458-8cc7-296dcf88a5de-kube-api-access-2xs87\") on node \"crc\" DevicePath \"\"" Dec 11 11:08:40 crc kubenswrapper[4953]: I1211 11:08:40.063186 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9213234-122f-4458-8cc7-296dcf88a5de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9213234-122f-4458-8cc7-296dcf88a5de" (UID: "e9213234-122f-4458-8cc7-296dcf88a5de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:08:40 crc kubenswrapper[4953]: I1211 11:08:40.150390 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9213234-122f-4458-8cc7-296dcf88a5de-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:08:40 crc kubenswrapper[4953]: I1211 11:08:40.591024 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6tfb" event={"ID":"e9213234-122f-4458-8cc7-296dcf88a5de","Type":"ContainerDied","Data":"0d55277314170a239d7ce81b9dd66821f7082df6cd96c418c347c63aa8133b13"} Dec 11 11:08:40 crc kubenswrapper[4953]: I1211 11:08:40.591907 4953 scope.go:117] "RemoveContainer" containerID="f13db964d536c3e99406fa47d5372e9160677249724614579d80613ab4df1274" Dec 11 11:08:40 crc kubenswrapper[4953]: I1211 11:08:40.591107 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b6tfb" Dec 11 11:08:40 crc kubenswrapper[4953]: I1211 11:08:40.617136 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b6tfb"] Dec 11 11:08:40 crc kubenswrapper[4953]: I1211 11:08:40.619368 4953 scope.go:117] "RemoveContainer" containerID="e06666bbca532e9194d64caed532d531e6c95c000858fbc3a30422d210618da0" Dec 11 11:08:40 crc kubenswrapper[4953]: I1211 11:08:40.624871 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b6tfb"] Dec 11 11:08:40 crc kubenswrapper[4953]: I1211 11:08:40.638424 4953 scope.go:117] "RemoveContainer" containerID="eed6136b75d24f1c07d298434edf4b27ff066400814832bad6307f3952867864" Dec 11 11:08:42 crc kubenswrapper[4953]: I1211 11:08:42.486477 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9213234-122f-4458-8cc7-296dcf88a5de" path="/var/lib/kubelet/pods/e9213234-122f-4458-8cc7-296dcf88a5de/volumes" Dec 11 11:08:48 crc kubenswrapper[4953]: I1211 11:08:48.193727 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:08:48 crc kubenswrapper[4953]: I1211 11:08:48.194276 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:09:18 crc kubenswrapper[4953]: I1211 11:09:18.194229 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:09:18 crc kubenswrapper[4953]: I1211 11:09:18.194838 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:09:18 crc kubenswrapper[4953]: I1211 11:09:18.194940 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 11:09:18 crc kubenswrapper[4953]: I1211 11:09:18.195700 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3596d60146c5bcb908e0f575e2de3ca703cbc87c2aae02fd995c4fd54c05bec0"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 11:09:18 crc kubenswrapper[4953]: I1211 11:09:18.195765 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://3596d60146c5bcb908e0f575e2de3ca703cbc87c2aae02fd995c4fd54c05bec0" gracePeriod=600 
Dec 11 11:09:18 crc kubenswrapper[4953]: I1211 11:09:18.907026 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="3596d60146c5bcb908e0f575e2de3ca703cbc87c2aae02fd995c4fd54c05bec0" exitCode=0 Dec 11 11:09:18 crc kubenswrapper[4953]: I1211 11:09:18.907077 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"3596d60146c5bcb908e0f575e2de3ca703cbc87c2aae02fd995c4fd54c05bec0"} Dec 11 11:09:18 crc kubenswrapper[4953]: I1211 11:09:18.907397 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"} Dec 11 11:09:18 crc kubenswrapper[4953]: I1211 11:09:18.907419 4953 scope.go:117] "RemoveContainer" containerID="f156bb95cd3385f36a1370320e380956da4ee8d10144d102d832956b66b536ef" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.062380 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sx27x"] Dec 11 11:10:06 crc kubenswrapper[4953]: E1211 11:10:06.065298 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9213234-122f-4458-8cc7-296dcf88a5de" containerName="registry-server" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.065352 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9213234-122f-4458-8cc7-296dcf88a5de" containerName="registry-server" Dec 11 11:10:06 crc kubenswrapper[4953]: E1211 11:10:06.065412 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9213234-122f-4458-8cc7-296dcf88a5de" containerName="extract-utilities" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.065436 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9213234-122f-4458-8cc7-296dcf88a5de" containerName="extract-utilities" Dec 11 11:10:06 crc kubenswrapper[4953]: E1211 11:10:06.065487 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9213234-122f-4458-8cc7-296dcf88a5de" containerName="extract-content" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.065501 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9213234-122f-4458-8cc7-296dcf88a5de" containerName="extract-content" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.066787 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9213234-122f-4458-8cc7-296dcf88a5de" containerName="registry-server" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.177172 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx27x"] Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.177322 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.262553 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgwzv\" (UniqueName: \"kubernetes.io/projected/b4335469-7d05-45d0-8941-15b94659f864-kube-api-access-fgwzv\") pod \"community-operators-sx27x\" (UID: \"b4335469-7d05-45d0-8941-15b94659f864\") " pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.262662 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4335469-7d05-45d0-8941-15b94659f864-utilities\") pod \"community-operators-sx27x\" (UID: \"b4335469-7d05-45d0-8941-15b94659f864\") " pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.262738 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4335469-7d05-45d0-8941-15b94659f864-catalog-content\") pod \"community-operators-sx27x\" (UID: \"b4335469-7d05-45d0-8941-15b94659f864\") " pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.363418 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgwzv\" (UniqueName: \"kubernetes.io/projected/b4335469-7d05-45d0-8941-15b94659f864-kube-api-access-fgwzv\") pod \"community-operators-sx27x\" (UID: \"b4335469-7d05-45d0-8941-15b94659f864\") " pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.363490 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4335469-7d05-45d0-8941-15b94659f864-utilities\") pod \"community-operators-sx27x\" (UID: \"b4335469-7d05-45d0-8941-15b94659f864\") " pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.363549 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4335469-7d05-45d0-8941-15b94659f864-catalog-content\") pod \"community-operators-sx27x\" (UID: \"b4335469-7d05-45d0-8941-15b94659f864\") " pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.364092 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4335469-7d05-45d0-8941-15b94659f864-catalog-content\") pod \"community-operators-sx27x\" (UID: \"b4335469-7d05-45d0-8941-15b94659f864\") " pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.364217 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4335469-7d05-45d0-8941-15b94659f864-utilities\") pod \"community-operators-sx27x\" (UID: \"b4335469-7d05-45d0-8941-15b94659f864\") " pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.390163 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgwzv\" (UniqueName: \"kubernetes.io/projected/b4335469-7d05-45d0-8941-15b94659f864-kube-api-access-fgwzv\") pod 
\"community-operators-sx27x\" (UID: \"b4335469-7d05-45d0-8941-15b94659f864\") " pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.513546 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:06 crc kubenswrapper[4953]: I1211 11:10:06.869143 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx27x"] Dec 11 11:10:06 crc kubenswrapper[4953]: W1211 11:10:06.878333 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4335469_7d05_45d0_8941_15b94659f864.slice/crio-f105bae54587a4ae3850cc3c334742d66ce4d950567ca8eee6eb2998b0cc426a WatchSource:0}: Error finding container f105bae54587a4ae3850cc3c334742d66ce4d950567ca8eee6eb2998b0cc426a: Status 404 returned error can't find the container with id f105bae54587a4ae3850cc3c334742d66ce4d950567ca8eee6eb2998b0cc426a Dec 11 11:10:07 crc kubenswrapper[4953]: I1211 11:10:07.535629 4953 generic.go:334] "Generic (PLEG): container finished" podID="b4335469-7d05-45d0-8941-15b94659f864" containerID="d5a7b96a78b05ae0895cd9ce5dab2a9d93823b6bfe3fbe339ddbd712cfdeca87" exitCode=0 Dec 11 11:10:07 crc kubenswrapper[4953]: I1211 11:10:07.535767 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx27x" event={"ID":"b4335469-7d05-45d0-8941-15b94659f864","Type":"ContainerDied","Data":"d5a7b96a78b05ae0895cd9ce5dab2a9d93823b6bfe3fbe339ddbd712cfdeca87"} Dec 11 11:10:07 crc kubenswrapper[4953]: I1211 11:10:07.535939 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx27x" event={"ID":"b4335469-7d05-45d0-8941-15b94659f864","Type":"ContainerStarted","Data":"f105bae54587a4ae3850cc3c334742d66ce4d950567ca8eee6eb2998b0cc426a"} Dec 11 11:10:09 crc kubenswrapper[4953]: I1211 11:10:09.559257 4953 generic.go:334] "Generic (PLEG): container finished" podID="b4335469-7d05-45d0-8941-15b94659f864" containerID="7daf3eba22783a6db5a6c8d66a82396a90ee0a10d2f7fd09b6d6db8d62a000de" exitCode=0 Dec 11 11:10:09 crc kubenswrapper[4953]: I1211 11:10:09.559358 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx27x" event={"ID":"b4335469-7d05-45d0-8941-15b94659f864","Type":"ContainerDied","Data":"7daf3eba22783a6db5a6c8d66a82396a90ee0a10d2f7fd09b6d6db8d62a000de"} Dec 11 11:10:10 crc kubenswrapper[4953]: I1211 11:10:10.568037 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx27x" event={"ID":"b4335469-7d05-45d0-8941-15b94659f864","Type":"ContainerStarted","Data":"1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5"} Dec 11 11:10:16 crc kubenswrapper[4953]: I1211 11:10:16.514899 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:16 crc kubenswrapper[4953]: I1211 11:10:16.516427 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:16 crc kubenswrapper[4953]: I1211 11:10:16.560267 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:16 crc kubenswrapper[4953]: I1211 11:10:16.583310 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-sx27x" podStartSLOduration=8.124936092 podStartE2EDuration="10.583283148s" podCreationTimestamp="2025-12-11 11:10:06 +0000 UTC" firstStartedPulling="2025-12-11 11:10:07.537071385 +0000 UTC m=+3525.560930418" lastFinishedPulling="2025-12-11 11:10:09.995418441 +0000 UTC m=+3528.019277474" observedRunningTime="2025-12-11 11:10:10.605041303 +0000 UTC m=+3528.628900336" watchObservedRunningTime="2025-12-11 11:10:16.583283148 +0000 UTC m=+3534.607142181" Dec 11 11:10:16 crc kubenswrapper[4953]: I1211 11:10:16.724991 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:16 crc kubenswrapper[4953]: I1211 11:10:16.889216 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sx27x"] Dec 11 11:10:18 crc kubenswrapper[4953]: I1211 11:10:18.685946 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sx27x" podUID="b4335469-7d05-45d0-8941-15b94659f864" containerName="registry-server" containerID="cri-o://1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5" gracePeriod=2 Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.445012 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx27x" Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.567526 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgwzv\" (UniqueName: \"kubernetes.io/projected/b4335469-7d05-45d0-8941-15b94659f864-kube-api-access-fgwzv\") pod \"b4335469-7d05-45d0-8941-15b94659f864\" (UID: \"b4335469-7d05-45d0-8941-15b94659f864\") " Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.567680 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4335469-7d05-45d0-8941-15b94659f864-catalog-content\") pod \"b4335469-7d05-45d0-8941-15b94659f864\" (UID: \"b4335469-7d05-45d0-8941-15b94659f864\") " Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.567748 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4335469-7d05-45d0-8941-15b94659f864-utilities\") pod \"b4335469-7d05-45d0-8941-15b94659f864\" (UID: \"b4335469-7d05-45d0-8941-15b94659f864\") " Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.569001 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4335469-7d05-45d0-8941-15b94659f864-utilities" (OuterVolumeSpecName: "utilities") pod "b4335469-7d05-45d0-8941-15b94659f864" (UID: "b4335469-7d05-45d0-8941-15b94659f864"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.573748 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4335469-7d05-45d0-8941-15b94659f864-kube-api-access-fgwzv" (OuterVolumeSpecName: "kube-api-access-fgwzv") pod "b4335469-7d05-45d0-8941-15b94659f864" (UID: "b4335469-7d05-45d0-8941-15b94659f864"). InnerVolumeSpecName "kube-api-access-fgwzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.627836 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4335469-7d05-45d0-8941-15b94659f864-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4335469-7d05-45d0-8941-15b94659f864" (UID: "b4335469-7d05-45d0-8941-15b94659f864"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.670013 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4335469-7d05-45d0-8941-15b94659f864-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.670352 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgwzv\" (UniqueName: \"kubernetes.io/projected/b4335469-7d05-45d0-8941-15b94659f864-kube-api-access-fgwzv\") on node \"crc\" DevicePath \"\"" Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.670417 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4335469-7d05-45d0-8941-15b94659f864-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.702623 4953 generic.go:334] "Generic (PLEG): container finished" podID="b4335469-7d05-45d0-8941-15b94659f864" containerID="1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5" exitCode=0 Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.702670 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx27x" event={"ID":"b4335469-7d05-45d0-8941-15b94659f864","Type":"ContainerDied","Data":"1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5"} Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.702695 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx27x" event={"ID":"b4335469-7d05-45d0-8941-15b94659f864","Type":"ContainerDied","Data":"f105bae54587a4ae3850cc3c334742d66ce4d950567ca8eee6eb2998b0cc426a"} Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.702711 4953 scope.go:117] "RemoveContainer" containerID="1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5" Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.702709 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sx27x"
Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.728725 4953 scope.go:117] "RemoveContainer" containerID="7daf3eba22783a6db5a6c8d66a82396a90ee0a10d2f7fd09b6d6db8d62a000de"
Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.737383 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sx27x"]
Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.744397 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sx27x"]
Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.766112 4953 scope.go:117] "RemoveContainer" containerID="d5a7b96a78b05ae0895cd9ce5dab2a9d93823b6bfe3fbe339ddbd712cfdeca87"
Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.784999 4953 scope.go:117] "RemoveContainer" containerID="1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5"
Dec 11 11:10:20 crc kubenswrapper[4953]: E1211 11:10:20.785510 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5\": container with ID starting with 1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5 not found: ID does not exist" containerID="1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5"
Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.785543 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5"} err="failed to get container status \"1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5\": rpc error: code = NotFound desc = could not find container \"1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5\": container with ID starting with 1c1fd716404b4a5a3290df2177e38bbc185a868870d6ebe4c268a2a3fbc78db5 not found: ID does not exist"
Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.785566 4953 scope.go:117] "RemoveContainer" containerID="7daf3eba22783a6db5a6c8d66a82396a90ee0a10d2f7fd09b6d6db8d62a000de"
Dec 11 11:10:20 crc kubenswrapper[4953]: E1211 11:10:20.786993 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7daf3eba22783a6db5a6c8d66a82396a90ee0a10d2f7fd09b6d6db8d62a000de\": container with ID starting with 7daf3eba22783a6db5a6c8d66a82396a90ee0a10d2f7fd09b6d6db8d62a000de not found: ID does not exist" containerID="7daf3eba22783a6db5a6c8d66a82396a90ee0a10d2f7fd09b6d6db8d62a000de"
Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.787047 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf3eba22783a6db5a6c8d66a82396a90ee0a10d2f7fd09b6d6db8d62a000de"} err="failed to get container status \"7daf3eba22783a6db5a6c8d66a82396a90ee0a10d2f7fd09b6d6db8d62a000de\": rpc error: code = NotFound desc = could not find container \"7daf3eba22783a6db5a6c8d66a82396a90ee0a10d2f7fd09b6d6db8d62a000de\": container with ID starting with 7daf3eba22783a6db5a6c8d66a82396a90ee0a10d2f7fd09b6d6db8d62a000de not found: ID does not exist"
Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.787082 4953 scope.go:117] "RemoveContainer" containerID="d5a7b96a78b05ae0895cd9ce5dab2a9d93823b6bfe3fbe339ddbd712cfdeca87"
Dec 11 11:10:20 crc kubenswrapper[4953]: E1211 11:10:20.787398 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5a7b96a78b05ae0895cd9ce5dab2a9d93823b6bfe3fbe339ddbd712cfdeca87\": container with ID starting with d5a7b96a78b05ae0895cd9ce5dab2a9d93823b6bfe3fbe339ddbd712cfdeca87 not found: ID does not exist" containerID="d5a7b96a78b05ae0895cd9ce5dab2a9d93823b6bfe3fbe339ddbd712cfdeca87"
Dec 11 11:10:20 crc kubenswrapper[4953]: I1211 11:10:20.787429 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5a7b96a78b05ae0895cd9ce5dab2a9d93823b6bfe3fbe339ddbd712cfdeca87"} err="failed to get container status \"d5a7b96a78b05ae0895cd9ce5dab2a9d93823b6bfe3fbe339ddbd712cfdeca87\": rpc error: code = NotFound desc = could not find container \"d5a7b96a78b05ae0895cd9ce5dab2a9d93823b6bfe3fbe339ddbd712cfdeca87\": container with ID starting with d5a7b96a78b05ae0895cd9ce5dab2a9d93823b6bfe3fbe339ddbd712cfdeca87 not found: ID does not exist"
Dec 11 11:10:22 crc kubenswrapper[4953]: I1211 11:10:22.485987 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4335469-7d05-45d0-8941-15b94659f864" path="/var/lib/kubelet/pods/b4335469-7d05-45d0-8941-15b94659f864/volumes"
Dec 11 11:11:18 crc kubenswrapper[4953]: I1211 11:11:18.194350 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 11:11:18 crc kubenswrapper[4953]: I1211 11:11:18.195110 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 11:11:48 crc kubenswrapper[4953]: I1211 11:11:48.194647 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 11:11:48 crc kubenswrapper[4953]: I1211 11:11:48.195346 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 11:12:18 crc kubenswrapper[4953]: I1211 11:12:18.194076 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 11:12:18 crc kubenswrapper[4953]: I1211 11:12:18.195727 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 11:12:18 crc kubenswrapper[4953]: I1211 11:12:18.195809 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898"
Dec 11 11:12:18 crc kubenswrapper[4953]: I1211 11:12:18.198276 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 11:12:18 crc kubenswrapper[4953]: I1211 11:12:18.198407 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" gracePeriod=600
Dec 11 11:12:18 crc kubenswrapper[4953]: E1211 11:12:18.336789 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:12:18 crc kubenswrapper[4953]: I1211 11:12:18.575852 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" exitCode=0
Dec 11 11:12:18 crc kubenswrapper[4953]: I1211 11:12:18.575894 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"}
Dec 11 11:12:18 crc kubenswrapper[4953]: I1211 11:12:18.576031 4953 scope.go:117] "RemoveContainer" containerID="3596d60146c5bcb908e0f575e2de3ca703cbc87c2aae02fd995c4fd54c05bec0"
Dec 11 11:12:18 crc kubenswrapper[4953]: I1211 11:12:18.576754 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:12:18 crc kubenswrapper[4953]: E1211 11:12:18.577123 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:12:30 crc kubenswrapper[4953]: I1211 11:12:30.473992 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:12:30 crc kubenswrapper[4953]: E1211 11:12:30.474805 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:12:44 crc kubenswrapper[4953]: I1211 11:12:44.473900 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:12:44 crc kubenswrapper[4953]: E1211 11:12:44.474934 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:12:59 crc kubenswrapper[4953]: I1211 11:12:59.473686 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:12:59 crc kubenswrapper[4953]: E1211 11:12:59.474641 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:13:13 crc kubenswrapper[4953]: I1211 11:13:13.474007 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:13:13 crc kubenswrapper[4953]: E1211 11:13:13.477463 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:13:26 crc kubenswrapper[4953]: I1211 11:13:26.473534 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:13:26 crc kubenswrapper[4953]: E1211 11:13:26.474534 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:13:38 crc kubenswrapper[4953]: I1211 11:13:38.473260 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:13:38 crc kubenswrapper[4953]: E1211 11:13:38.475999 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:13:39 crc kubenswrapper[4953]: I1211 11:13:39.968303 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jg775"]
Dec 11 11:13:39 crc kubenswrapper[4953]: E1211 11:13:39.968753 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4335469-7d05-45d0-8941-15b94659f864" containerName="registry-server"
Dec 11 11:13:39 crc kubenswrapper[4953]: I1211 11:13:39.968776 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4335469-7d05-45d0-8941-15b94659f864" containerName="registry-server"
Dec 11 11:13:39 crc kubenswrapper[4953]: E1211 11:13:39.968790 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4335469-7d05-45d0-8941-15b94659f864" containerName="extract-utilities"
Dec 11 11:13:39 crc kubenswrapper[4953]: I1211 11:13:39.968796 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4335469-7d05-45d0-8941-15b94659f864" containerName="extract-utilities"
Dec 11 11:13:39 crc kubenswrapper[4953]: E1211 11:13:39.968804 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4335469-7d05-45d0-8941-15b94659f864" containerName="extract-content"
Dec 11 11:13:39 crc kubenswrapper[4953]: I1211 11:13:39.968811 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4335469-7d05-45d0-8941-15b94659f864" containerName="extract-content"
Dec 11 11:13:39 crc kubenswrapper[4953]: I1211 11:13:39.968982 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4335469-7d05-45d0-8941-15b94659f864" containerName="registry-server"
Dec 11 11:13:39 crc kubenswrapper[4953]: I1211 11:13:39.970262 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:39 crc kubenswrapper[4953]: I1211 11:13:39.986700 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jg775"]
Dec 11 11:13:40 crc kubenswrapper[4953]: I1211 11:13:40.140316 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-catalog-content\") pod \"redhat-marketplace-jg775\" (UID: \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\") " pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:40 crc kubenswrapper[4953]: I1211 11:13:40.140419 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-utilities\") pod \"redhat-marketplace-jg775\" (UID: \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\") " pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:40 crc kubenswrapper[4953]: I1211 11:13:40.140508 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcph2\" (UniqueName: \"kubernetes.io/projected/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-kube-api-access-xcph2\") pod \"redhat-marketplace-jg775\" (UID: \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\") " pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:40 crc kubenswrapper[4953]: I1211 11:13:40.242335 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-utilities\") pod \"redhat-marketplace-jg775\" (UID: \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\") " pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:40 crc kubenswrapper[4953]: I1211 11:13:40.242436 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcph2\" (UniqueName: \"kubernetes.io/projected/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-kube-api-access-xcph2\") pod \"redhat-marketplace-jg775\" (UID: \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\") " pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:40 crc kubenswrapper[4953]: I1211 11:13:40.242489 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-catalog-content\") pod \"redhat-marketplace-jg775\" (UID: \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\") " pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:40 crc kubenswrapper[4953]: I1211 11:13:40.243116 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-utilities\") pod \"redhat-marketplace-jg775\" (UID: \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\") " pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:40 crc kubenswrapper[4953]: I1211 11:13:40.243174 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-catalog-content\") pod \"redhat-marketplace-jg775\" (UID: \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\") " pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:40 crc kubenswrapper[4953]: I1211 11:13:40.267155 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcph2\" (UniqueName: \"kubernetes.io/projected/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-kube-api-access-xcph2\") pod \"redhat-marketplace-jg775\" (UID: \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\") " pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:40 crc kubenswrapper[4953]: I1211 11:13:40.358032 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:40 crc kubenswrapper[4953]: I1211 11:13:40.828920 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jg775"]
Dec 11 11:13:41 crc kubenswrapper[4953]: I1211 11:13:41.379198 4953 generic.go:334] "Generic (PLEG): container finished" podID="1b0b98bb-1873-42cb-9981-f19ab0d5f90c" containerID="44324cf2cc587ba1cea8f061c607cb5f2aa6677931ae87675084bd26b5db4e8b" exitCode=0
Dec 11 11:13:41 crc kubenswrapper[4953]: I1211 11:13:41.379284 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jg775" event={"ID":"1b0b98bb-1873-42cb-9981-f19ab0d5f90c","Type":"ContainerDied","Data":"44324cf2cc587ba1cea8f061c607cb5f2aa6677931ae87675084bd26b5db4e8b"}
Dec 11 11:13:41 crc kubenswrapper[4953]: I1211 11:13:41.382290 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jg775" event={"ID":"1b0b98bb-1873-42cb-9981-f19ab0d5f90c","Type":"ContainerStarted","Data":"eeefbc287128f2fe2704d0da4231e55d64aad066c2e0d8e5efa9d81ad999d5c4"}
Dec 11 11:13:41 crc kubenswrapper[4953]: I1211 11:13:41.383369 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 11 11:13:42 crc kubenswrapper[4953]: I1211 11:13:42.390460 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jg775" event={"ID":"1b0b98bb-1873-42cb-9981-f19ab0d5f90c","Type":"ContainerStarted","Data":"1c9e2374bc16ee9ad2b57454cdb16b0b6c3ef6f6a1426c9a7d75f0f7ccda9866"}
Dec 11 11:13:43 crc kubenswrapper[4953]: I1211 11:13:43.481854 4953 generic.go:334] "Generic (PLEG): container finished" podID="1b0b98bb-1873-42cb-9981-f19ab0d5f90c" containerID="1c9e2374bc16ee9ad2b57454cdb16b0b6c3ef6f6a1426c9a7d75f0f7ccda9866" exitCode=0
Dec 11 11:13:43 crc kubenswrapper[4953]: I1211 11:13:43.481919 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jg775" event={"ID":"1b0b98bb-1873-42cb-9981-f19ab0d5f90c","Type":"ContainerDied","Data":"1c9e2374bc16ee9ad2b57454cdb16b0b6c3ef6f6a1426c9a7d75f0f7ccda9866"}
Dec 11 11:13:44 crc kubenswrapper[4953]: I1211 11:13:44.491839 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jg775" event={"ID":"1b0b98bb-1873-42cb-9981-f19ab0d5f90c","Type":"ContainerStarted","Data":"8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c"}
Dec 11 11:13:44 crc kubenswrapper[4953]: I1211 11:13:44.522047 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jg775" podStartSLOduration=2.95296542 podStartE2EDuration="5.522022804s" podCreationTimestamp="2025-12-11 11:13:39 +0000 UTC" firstStartedPulling="2025-12-11 11:13:41.382845769 +0000 UTC m=+3739.406704832" lastFinishedPulling="2025-12-11 11:13:43.951903183 +0000 UTC m=+3741.975762216" observedRunningTime="2025-12-11 11:13:44.517211843 +0000 UTC m=+3742.541070896" watchObservedRunningTime="2025-12-11 11:13:44.522022804 +0000 UTC m=+3742.545881847"
Dec 11 11:13:50 crc kubenswrapper[4953]: I1211 11:13:50.359732 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:50 crc kubenswrapper[4953]: I1211 11:13:50.360266 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:50 crc kubenswrapper[4953]: I1211 11:13:50.427201 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:50 crc kubenswrapper[4953]: I1211 11:13:50.525271 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:13:50 crc kubenswrapper[4953]: E1211 11:13:50.525851 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:13:50 crc kubenswrapper[4953]: I1211 11:13:50.643311 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:50 crc kubenswrapper[4953]: I1211 11:13:50.743515 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jg775"]
Dec 11 11:13:52 crc kubenswrapper[4953]: I1211 11:13:52.551099 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jg775" podUID="1b0b98bb-1873-42cb-9981-f19ab0d5f90c" containerName="registry-server" containerID="cri-o://8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c" gracePeriod=2
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.547927 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.558818 4953 generic.go:334] "Generic (PLEG): container finished" podID="1b0b98bb-1873-42cb-9981-f19ab0d5f90c" containerID="8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c" exitCode=0
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.558887 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jg775" event={"ID":"1b0b98bb-1873-42cb-9981-f19ab0d5f90c","Type":"ContainerDied","Data":"8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c"}
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.558962 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jg775" event={"ID":"1b0b98bb-1873-42cb-9981-f19ab0d5f90c","Type":"ContainerDied","Data":"eeefbc287128f2fe2704d0da4231e55d64aad066c2e0d8e5efa9d81ad999d5c4"}
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.558994 4953 scope.go:117] "RemoveContainer" containerID="8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c"
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.559477 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jg775"
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.579251 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-catalog-content\") pod \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\" (UID: \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\") "
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.579460 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcph2\" (UniqueName: \"kubernetes.io/projected/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-kube-api-access-xcph2\") pod \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\" (UID: \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\") "
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.579524 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-utilities\") pod \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\" (UID: \"1b0b98bb-1873-42cb-9981-f19ab0d5f90c\") "
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.581925 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-utilities" (OuterVolumeSpecName: "utilities") pod "1b0b98bb-1873-42cb-9981-f19ab0d5f90c" (UID: "1b0b98bb-1873-42cb-9981-f19ab0d5f90c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.590400 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-kube-api-access-xcph2" (OuterVolumeSpecName: "kube-api-access-xcph2") pod "1b0b98bb-1873-42cb-9981-f19ab0d5f90c" (UID: "1b0b98bb-1873-42cb-9981-f19ab0d5f90c"). InnerVolumeSpecName "kube-api-access-xcph2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.595104 4953 scope.go:117] "RemoveContainer" containerID="1c9e2374bc16ee9ad2b57454cdb16b0b6c3ef6f6a1426c9a7d75f0f7ccda9866"
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.622778 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b0b98bb-1873-42cb-9981-f19ab0d5f90c" (UID: "1b0b98bb-1873-42cb-9981-f19ab0d5f90c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.629312 4953 scope.go:117] "RemoveContainer" containerID="44324cf2cc587ba1cea8f061c607cb5f2aa6677931ae87675084bd26b5db4e8b"
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.655622 4953 scope.go:117] "RemoveContainer" containerID="8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c"
Dec 11 11:13:53 crc kubenswrapper[4953]: E1211 11:13:53.656101 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c\": container with ID starting with 8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c not found: ID does not exist" containerID="8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c"
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.656154 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c"} err="failed to get container status \"8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c\": rpc error: code = NotFound desc = could not find container \"8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c\": container with ID starting with 8de531ff64c4465d8106e4cb2b5b7d3137e1452a45b7c7383e1dcac562713b2c not found: ID does not exist"
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.656185 4953 scope.go:117] "RemoveContainer" containerID="1c9e2374bc16ee9ad2b57454cdb16b0b6c3ef6f6a1426c9a7d75f0f7ccda9866"
Dec 11 11:13:53 crc kubenswrapper[4953]: E1211 11:13:53.656479 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9e2374bc16ee9ad2b57454cdb16b0b6c3ef6f6a1426c9a7d75f0f7ccda9866\": container with ID starting with 1c9e2374bc16ee9ad2b57454cdb16b0b6c3ef6f6a1426c9a7d75f0f7ccda9866 not found: ID does not exist" containerID="1c9e2374bc16ee9ad2b57454cdb16b0b6c3ef6f6a1426c9a7d75f0f7ccda9866"
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.656513 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9e2374bc16ee9ad2b57454cdb16b0b6c3ef6f6a1426c9a7d75f0f7ccda9866"} err="failed to get container status \"1c9e2374bc16ee9ad2b57454cdb16b0b6c3ef6f6a1426c9a7d75f0f7ccda9866\": rpc error: code = NotFound desc = could not find container \"1c9e2374bc16ee9ad2b57454cdb16b0b6c3ef6f6a1426c9a7d75f0f7ccda9866\": container with ID starting with 1c9e2374bc16ee9ad2b57454cdb16b0b6c3ef6f6a1426c9a7d75f0f7ccda9866 not found: ID does not exist"
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.656536 4953 scope.go:117] "RemoveContainer" containerID="44324cf2cc587ba1cea8f061c607cb5f2aa6677931ae87675084bd26b5db4e8b"
Dec 11 11:13:53 crc kubenswrapper[4953]: E1211 11:13:53.656917 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44324cf2cc587ba1cea8f061c607cb5f2aa6677931ae87675084bd26b5db4e8b\": container with ID starting with 44324cf2cc587ba1cea8f061c607cb5f2aa6677931ae87675084bd26b5db4e8b not found: ID does not exist" containerID="44324cf2cc587ba1cea8f061c607cb5f2aa6677931ae87675084bd26b5db4e8b"
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.656946 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44324cf2cc587ba1cea8f061c607cb5f2aa6677931ae87675084bd26b5db4e8b"} err="failed to get container status \"44324cf2cc587ba1cea8f061c607cb5f2aa6677931ae87675084bd26b5db4e8b\": rpc error: code = NotFound desc = could not find container \"44324cf2cc587ba1cea8f061c607cb5f2aa6677931ae87675084bd26b5db4e8b\": container with ID starting with 44324cf2cc587ba1cea8f061c607cb5f2aa6677931ae87675084bd26b5db4e8b not found: ID does not exist"
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.681507 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcph2\" (UniqueName: \"kubernetes.io/projected/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-kube-api-access-xcph2\") on node \"crc\" DevicePath \"\""
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.681550 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.681566 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b0b98bb-1873-42cb-9981-f19ab0d5f90c-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.890383 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jg775"]
Dec 11 11:13:53 crc kubenswrapper[4953]: I1211 11:13:53.896388 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jg775"]
Dec 11 11:13:54 crc kubenswrapper[4953]: I1211 11:13:54.482294 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b0b98bb-1873-42cb-9981-f19ab0d5f90c" path="/var/lib/kubelet/pods/1b0b98bb-1873-42cb-9981-f19ab0d5f90c/volumes"
Dec 11 11:14:05 crc kubenswrapper[4953]: I1211 11:14:05.473628 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:14:05 crc kubenswrapper[4953]: E1211 11:14:05.474374 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:14:17 crc kubenswrapper[4953]: I1211 11:14:17.473566 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:14:17 crc kubenswrapper[4953]: E1211 11:14:17.474541 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:14:28 crc kubenswrapper[4953]: I1211 11:14:28.473557 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:14:28 crc kubenswrapper[4953]: E1211 11:14:28.475473 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:14:39 crc kubenswrapper[4953]: I1211 11:14:39.473605 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:14:39 crc kubenswrapper[4953]: E1211 11:14:39.474387 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:14:52 crc kubenswrapper[4953]: I1211 11:14:52.477311 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:14:52 crc kubenswrapper[4953]: E1211 11:14:52.479451 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.162783 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"]
Dec 11 11:15:00 crc kubenswrapper[4953]: E1211 11:15:00.163817 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0b98bb-1873-42cb-9981-f19ab0d5f90c" containerName="extract-content"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.163836 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0b98bb-1873-42cb-9981-f19ab0d5f90c" containerName="extract-content"
Dec 11 11:15:00 crc kubenswrapper[4953]: E1211 11:15:00.163847 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0b98bb-1873-42cb-9981-f19ab0d5f90c" containerName="extract-utilities"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.163856 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0b98bb-1873-42cb-9981-f19ab0d5f90c" containerName="extract-utilities"
Dec 11 11:15:00 crc kubenswrapper[4953]: E1211 11:15:00.163880 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0b98bb-1873-42cb-9981-f19ab0d5f90c" containerName="registry-server"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.163889 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0b98bb-1873-42cb-9981-f19ab0d5f90c" containerName="registry-server"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.164088 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b0b98bb-1873-42cb-9981-f19ab0d5f90c" containerName="registry-server"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.194258 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"]
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.194406 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.201171 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.204718 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.340673 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzs67\" (UniqueName: \"kubernetes.io/projected/d41182da-2b5d-4a32-b67e-6b489b0653e1-kube-api-access-wzs67\") pod \"collect-profiles-29424195-vxz29\" (UID: \"d41182da-2b5d-4a32-b67e-6b489b0653e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.340742 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d41182da-2b5d-4a32-b67e-6b489b0653e1-config-volume\") pod \"collect-profiles-29424195-vxz29\" (UID: \"d41182da-2b5d-4a32-b67e-6b489b0653e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.340787 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d41182da-2b5d-4a32-b67e-6b489b0653e1-secret-volume\") pod \"collect-profiles-29424195-vxz29\" (UID: \"d41182da-2b5d-4a32-b67e-6b489b0653e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.442720 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d41182da-2b5d-4a32-b67e-6b489b0653e1-config-volume\") pod \"collect-profiles-29424195-vxz29\" (UID: \"d41182da-2b5d-4a32-b67e-6b489b0653e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.442783 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d41182da-2b5d-4a32-b67e-6b489b0653e1-secret-volume\") pod \"collect-profiles-29424195-vxz29\" (UID: \"d41182da-2b5d-4a32-b67e-6b489b0653e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.442917 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzs67\" (UniqueName: \"kubernetes.io/projected/d41182da-2b5d-4a32-b67e-6b489b0653e1-kube-api-access-wzs67\") pod \"collect-profiles-29424195-vxz29\" (UID: \"d41182da-2b5d-4a32-b67e-6b489b0653e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.444198 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d41182da-2b5d-4a32-b67e-6b489b0653e1-config-volume\") pod \"collect-profiles-29424195-vxz29\" (UID: \"d41182da-2b5d-4a32-b67e-6b489b0653e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.450410 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d41182da-2b5d-4a32-b67e-6b489b0653e1-secret-volume\") pod \"collect-profiles-29424195-vxz29\" (UID: \"d41182da-2b5d-4a32-b67e-6b489b0653e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.466594 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzs67\" (UniqueName: \"kubernetes.io/projected/d41182da-2b5d-4a32-b67e-6b489b0653e1-kube-api-access-wzs67\") pod \"collect-profiles-29424195-vxz29\" (UID: \"d41182da-2b5d-4a32-b67e-6b489b0653e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.533173 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:00 crc kubenswrapper[4953]: I1211 11:15:00.967860 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"]
Dec 11 11:15:01 crc kubenswrapper[4953]: I1211 11:15:01.199612 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29" event={"ID":"d41182da-2b5d-4a32-b67e-6b489b0653e1","Type":"ContainerStarted","Data":"5b77ee30d47cacb42a5340fccadf0fdddb48684efb6a3bc7481f36d9ccc6e8e6"}
Dec 11 11:15:02 crc kubenswrapper[4953]: I1211 11:15:02.207408 4953 generic.go:334] "Generic (PLEG): container finished" podID="d41182da-2b5d-4a32-b67e-6b489b0653e1" containerID="38cc3f37dbdc5b6c70247b8ba9be5c58be8f953abfd85194abfa1767ca3b9586" exitCode=0
Dec 11 11:15:02 crc kubenswrapper[4953]: I1211 11:15:02.207470 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29" event={"ID":"d41182da-2b5d-4a32-b67e-6b489b0653e1","Type":"ContainerDied","Data":"38cc3f37dbdc5b6c70247b8ba9be5c58be8f953abfd85194abfa1767ca3b9586"}
Dec 11 11:15:03 crc kubenswrapper[4953]: I1211 11:15:03.477216 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:03 crc kubenswrapper[4953]: I1211 11:15:03.591641 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d41182da-2b5d-4a32-b67e-6b489b0653e1-secret-volume\") pod \"d41182da-2b5d-4a32-b67e-6b489b0653e1\" (UID: \"d41182da-2b5d-4a32-b67e-6b489b0653e1\") "
Dec 11 11:15:03 crc kubenswrapper[4953]: I1211 11:15:03.592205 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzs67\" (UniqueName: \"kubernetes.io/projected/d41182da-2b5d-4a32-b67e-6b489b0653e1-kube-api-access-wzs67\") pod \"d41182da-2b5d-4a32-b67e-6b489b0653e1\" (UID: \"d41182da-2b5d-4a32-b67e-6b489b0653e1\") "
Dec 11 11:15:03 crc kubenswrapper[4953]: I1211 11:15:03.592346 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d41182da-2b5d-4a32-b67e-6b489b0653e1-config-volume\") pod \"d41182da-2b5d-4a32-b67e-6b489b0653e1\" (UID: \"d41182da-2b5d-4a32-b67e-6b489b0653e1\") "
Dec 11 11:15:03 crc kubenswrapper[4953]: I1211 11:15:03.594389 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d41182da-2b5d-4a32-b67e-6b489b0653e1-config-volume" (OuterVolumeSpecName: "config-volume") pod "d41182da-2b5d-4a32-b67e-6b489b0653e1" (UID: "d41182da-2b5d-4a32-b67e-6b489b0653e1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 11:15:03 crc kubenswrapper[4953]: I1211 11:15:03.598325 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41182da-2b5d-4a32-b67e-6b489b0653e1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d41182da-2b5d-4a32-b67e-6b489b0653e1" (UID: "d41182da-2b5d-4a32-b67e-6b489b0653e1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 11:15:03 crc kubenswrapper[4953]: I1211 11:15:03.599930 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d41182da-2b5d-4a32-b67e-6b489b0653e1-kube-api-access-wzs67" (OuterVolumeSpecName: "kube-api-access-wzs67") pod "d41182da-2b5d-4a32-b67e-6b489b0653e1" (UID: "d41182da-2b5d-4a32-b67e-6b489b0653e1"). InnerVolumeSpecName "kube-api-access-wzs67". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 11:15:03 crc kubenswrapper[4953]: I1211 11:15:03.694818 4953 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d41182da-2b5d-4a32-b67e-6b489b0653e1-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 11 11:15:03 crc kubenswrapper[4953]: I1211 11:15:03.694862 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzs67\" (UniqueName: \"kubernetes.io/projected/d41182da-2b5d-4a32-b67e-6b489b0653e1-kube-api-access-wzs67\") on node \"crc\" DevicePath \"\""
Dec 11 11:15:03 crc kubenswrapper[4953]: I1211 11:15:03.694874 4953 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d41182da-2b5d-4a32-b67e-6b489b0653e1-config-volume\") on node \"crc\" DevicePath \"\""
Dec 11 11:15:04 crc kubenswrapper[4953]: I1211 11:15:04.223706 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29" event={"ID":"d41182da-2b5d-4a32-b67e-6b489b0653e1","Type":"ContainerDied","Data":"5b77ee30d47cacb42a5340fccadf0fdddb48684efb6a3bc7481f36d9ccc6e8e6"}
Dec 11 11:15:04 crc kubenswrapper[4953]: I1211 11:15:04.223745 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424195-vxz29"
Dec 11 11:15:04 crc kubenswrapper[4953]: I1211 11:15:04.223756 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b77ee30d47cacb42a5340fccadf0fdddb48684efb6a3bc7481f36d9ccc6e8e6"
Dec 11 11:15:04 crc kubenswrapper[4953]: I1211 11:15:04.558645 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"]
Dec 11 11:15:04 crc kubenswrapper[4953]: I1211 11:15:04.565635 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424150-978gp"]
Dec 11 11:15:05 crc kubenswrapper[4953]: I1211 11:15:05.473190 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
Dec 11 11:15:05 crc kubenswrapper[4953]: E1211 11:15:05.473926 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:15:06 crc kubenswrapper[4953]: I1211 11:15:06.485130 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8937b8d-554d-44bf-9a69-b0e6350fd8f0" path="/var/lib/kubelet/pods/b8937b8d-554d-44bf-9a69-b0e6350fd8f0/volumes"
Dec 11 11:15:19 crc kubenswrapper[4953]: I1211 11:15:19.473520 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9"
pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:15:34 crc kubenswrapper[4953]: I1211 11:15:34.473694 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" Dec 11 11:15:34 crc kubenswrapper[4953]: E1211 11:15:34.474511 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:15:34 crc kubenswrapper[4953]: I1211 11:15:34.540348 4953 scope.go:117] "RemoveContainer" containerID="84cec215e2bf94f3194c2af17671802a197cf5821efaa3f0276adce66ccc2d68" Dec 11 11:15:47 crc kubenswrapper[4953]: I1211 11:15:47.473320 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" Dec 11 11:15:47 crc kubenswrapper[4953]: E1211 11:15:47.486852 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:16:02 crc kubenswrapper[4953]: I1211 11:16:02.479093 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" Dec 11 11:16:02 crc kubenswrapper[4953]: E1211 11:16:02.479954 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:16:13 crc kubenswrapper[4953]: I1211 11:16:13.473653 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" Dec 11 11:16:13 crc kubenswrapper[4953]: E1211 11:16:13.474463 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:16:25 crc kubenswrapper[4953]: I1211 11:16:25.473369 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" Dec 11 11:16:25 crc kubenswrapper[4953]: E1211 11:16:25.474435 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:16:40 crc kubenswrapper[4953]: I1211 11:16:40.473160 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" Dec 11 11:16:40 crc kubenswrapper[4953]: E1211 11:16:40.473796 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:16:51 crc kubenswrapper[4953]: I1211 11:16:51.473589 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" Dec 11 11:16:51 crc kubenswrapper[4953]: E1211 11:16:51.474536 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:17:04 crc kubenswrapper[4953]: I1211 11:17:04.473441 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" Dec 11 11:17:04 crc kubenswrapper[4953]: E1211 11:17:04.474364 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:17:17 crc kubenswrapper[4953]: I1211 11:17:17.473731 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" Dec 11 11:17:17 crc kubenswrapper[4953]: E1211 11:17:17.474778 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:17:31 crc kubenswrapper[4953]: I1211 11:17:31.473694 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" Dec 11 11:17:32 crc kubenswrapper[4953]: I1211 11:17:32.625090 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"3cfab28c87780c4d4dda32247e636cecfc6f66e76b16aacbd60db038d590ac70"} Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.527771 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-29ggj"] Dec 11 11:18:38 crc kubenswrapper[4953]: E1211 11:18:38.528841 
4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d41182da-2b5d-4a32-b67e-6b489b0653e1" containerName="collect-profiles" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.528869 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="d41182da-2b5d-4a32-b67e-6b489b0653e1" containerName="collect-profiles" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.529051 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="d41182da-2b5d-4a32-b67e-6b489b0653e1" containerName="collect-profiles" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.530452 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.544137 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-29ggj"] Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.705955 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee6f306-5fad-4a1e-9167-253ca428c9cf-catalog-content\") pod \"certified-operators-29ggj\" (UID: \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\") " pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.706051 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee6f306-5fad-4a1e-9167-253ca428c9cf-utilities\") pod \"certified-operators-29ggj\" (UID: \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\") " pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.706124 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtbst\" (UniqueName: \"kubernetes.io/projected/4ee6f306-5fad-4a1e-9167-253ca428c9cf-kube-api-access-wtbst\") pod \"certified-operators-29ggj\" (UID: \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\") " pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.723635 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d2c4x"] Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.725795 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.740176 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2c4x"] Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.807717 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee6f306-5fad-4a1e-9167-253ca428c9cf-catalog-content\") pod \"certified-operators-29ggj\" (UID: \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\") " pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.807772 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkngm\" (UniqueName: \"kubernetes.io/projected/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-kube-api-access-pkngm\") pod \"redhat-operators-d2c4x\" (UID: \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\") " pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.807813 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-catalog-content\") pod \"redhat-operators-d2c4x\" (UID: \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\") " pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.807839 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee6f306-5fad-4a1e-9167-253ca428c9cf-utilities\") pod \"certified-operators-29ggj\" (UID: \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\") " pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.807865 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-utilities\") pod \"redhat-operators-d2c4x\" (UID: \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\") " pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.807906 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtbst\" (UniqueName: \"kubernetes.io/projected/4ee6f306-5fad-4a1e-9167-253ca428c9cf-kube-api-access-wtbst\") pod \"certified-operators-29ggj\" (UID: \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\") " pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.808666 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee6f306-5fad-4a1e-9167-253ca428c9cf-utilities\") pod \"certified-operators-29ggj\" (UID: \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\") " pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.808696 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee6f306-5fad-4a1e-9167-253ca428c9cf-catalog-content\") pod \"certified-operators-29ggj\" (UID: \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\") " pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.842669 4953 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wtbst\" (UniqueName: \"kubernetes.io/projected/4ee6f306-5fad-4a1e-9167-253ca428c9cf-kube-api-access-wtbst\") pod \"certified-operators-29ggj\" (UID: \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\") " pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.870080 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.909533 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkngm\" (UniqueName: \"kubernetes.io/projected/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-kube-api-access-pkngm\") pod \"redhat-operators-d2c4x\" (UID: \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\") " pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.909627 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-catalog-content\") pod \"redhat-operators-d2c4x\" (UID: \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\") " pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.909671 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-utilities\") pod \"redhat-operators-d2c4x\" (UID: \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\") " pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.910258 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-utilities\") pod \"redhat-operators-d2c4x\" (UID: \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\") " pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.911491 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-catalog-content\") pod \"redhat-operators-d2c4x\" (UID: \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\") " pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:38 crc kubenswrapper[4953]: I1211 11:18:38.932913 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkngm\" (UniqueName: \"kubernetes.io/projected/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-kube-api-access-pkngm\") pod \"redhat-operators-d2c4x\" (UID: \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\") " pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:39 crc kubenswrapper[4953]: I1211 11:18:39.044931 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:39 crc kubenswrapper[4953]: I1211 11:18:39.430335 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-29ggj"] Dec 11 11:18:39 crc kubenswrapper[4953]: I1211 11:18:39.736566 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2c4x"] Dec 11 11:18:40 crc kubenswrapper[4953]: I1211 11:18:40.256147 4953 generic.go:334] "Generic (PLEG): container finished" podID="4ee6f306-5fad-4a1e-9167-253ca428c9cf" containerID="40d83638c77d741b08c2d662d3947a7662d0d3b42fae6e2c2427f6851ea6b315" exitCode=0 Dec 11 11:18:40 crc kubenswrapper[4953]: I1211 11:18:40.256246 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29ggj" event={"ID":"4ee6f306-5fad-4a1e-9167-253ca428c9cf","Type":"ContainerDied","Data":"40d83638c77d741b08c2d662d3947a7662d0d3b42fae6e2c2427f6851ea6b315"} Dec 11 11:18:40 crc kubenswrapper[4953]: I1211 11:18:40.256431 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29ggj" event={"ID":"4ee6f306-5fad-4a1e-9167-253ca428c9cf","Type":"ContainerStarted","Data":"d80616abba1e63c14da15e85f515ac22b0771aadc1011538a9302e19a6957f72"} Dec 11 11:18:40 crc kubenswrapper[4953]: I1211 11:18:40.257949 4953 generic.go:334] "Generic (PLEG): container finished" podID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" containerID="0ccb172752db018cb9d68aa76a14b56ca514e309bba4425d194f8805525ea9b6" exitCode=0 Dec 11 11:18:40 crc kubenswrapper[4953]: I1211 11:18:40.258020 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2c4x" event={"ID":"3b7e6b93-2635-4c0a-a72e-61442dc68b2f","Type":"ContainerDied","Data":"0ccb172752db018cb9d68aa76a14b56ca514e309bba4425d194f8805525ea9b6"} Dec 11 11:18:40 crc kubenswrapper[4953]: I1211 11:18:40.258070 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2c4x" event={"ID":"3b7e6b93-2635-4c0a-a72e-61442dc68b2f","Type":"ContainerStarted","Data":"e43eb9939f726999f06be7fa5a676febefca27f8345014a0217cd309f56ed4eb"} Dec 11 11:18:41 crc kubenswrapper[4953]: I1211 11:18:41.266286 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2c4x" event={"ID":"3b7e6b93-2635-4c0a-a72e-61442dc68b2f","Type":"ContainerStarted","Data":"2c251b7ce2e4585592c5cdb3add8350d49cb6e6159d684243aa42195649f3eaa"} Dec 11 11:18:41 crc kubenswrapper[4953]: I1211 11:18:41.268628 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29ggj" event={"ID":"4ee6f306-5fad-4a1e-9167-253ca428c9cf","Type":"ContainerStarted","Data":"7cbb07199412ba4b3bc97ea68b264516736cec0e4b701be895a9f0e9b989f4cc"} Dec 11 11:18:42 crc kubenswrapper[4953]: I1211 11:18:42.279392 4953 generic.go:334] "Generic (PLEG): container finished" podID="4ee6f306-5fad-4a1e-9167-253ca428c9cf" containerID="7cbb07199412ba4b3bc97ea68b264516736cec0e4b701be895a9f0e9b989f4cc" exitCode=0 Dec 11 11:18:42 crc kubenswrapper[4953]: I1211 11:18:42.279469 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29ggj" event={"ID":"4ee6f306-5fad-4a1e-9167-253ca428c9cf","Type":"ContainerDied","Data":"7cbb07199412ba4b3bc97ea68b264516736cec0e4b701be895a9f0e9b989f4cc"} Dec 11 11:18:42 crc kubenswrapper[4953]: I1211 11:18:42.280905 4953 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Dec 11 11:18:43 crc kubenswrapper[4953]: I1211 11:18:43.288195 4953 generic.go:334] "Generic (PLEG): container finished" podID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" containerID="2c251b7ce2e4585592c5cdb3add8350d49cb6e6159d684243aa42195649f3eaa" exitCode=0 Dec 11 11:18:43 crc kubenswrapper[4953]: I1211 11:18:43.288279 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2c4x" event={"ID":"3b7e6b93-2635-4c0a-a72e-61442dc68b2f","Type":"ContainerDied","Data":"2c251b7ce2e4585592c5cdb3add8350d49cb6e6159d684243aa42195649f3eaa"} Dec 11 11:18:44 crc kubenswrapper[4953]: I1211 11:18:44.298415 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29ggj" event={"ID":"4ee6f306-5fad-4a1e-9167-253ca428c9cf","Type":"ContainerStarted","Data":"44ee9cf32d1336e5f6d9369616995883f8979d2a9660de6ec310a7663d10a12b"} Dec 11 11:18:44 crc kubenswrapper[4953]: I1211 11:18:44.317801 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-29ggj" podStartSLOduration=3.486379284 podStartE2EDuration="6.317778938s" podCreationTimestamp="2025-12-11 11:18:38 +0000 UTC" firstStartedPulling="2025-12-11 11:18:40.257885111 +0000 UTC m=+4038.281744144" lastFinishedPulling="2025-12-11 11:18:43.089284745 +0000 UTC m=+4041.113143798" observedRunningTime="2025-12-11 11:18:44.313190144 +0000 UTC m=+4042.337049177" watchObservedRunningTime="2025-12-11 11:18:44.317778938 +0000 UTC m=+4042.341637991" Dec 11 11:18:45 crc kubenswrapper[4953]: I1211 11:18:45.314688 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2c4x" event={"ID":"3b7e6b93-2635-4c0a-a72e-61442dc68b2f","Type":"ContainerStarted","Data":"28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4"} Dec 11 11:18:45 crc kubenswrapper[4953]: I1211 11:18:45.338881 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d2c4x" podStartSLOduration=3.306525327 podStartE2EDuration="7.338858689s" podCreationTimestamp="2025-12-11 11:18:38 +0000 UTC" firstStartedPulling="2025-12-11 11:18:40.26072957 +0000 UTC m=+4038.284588623" lastFinishedPulling="2025-12-11 11:18:44.293062952 +0000 UTC m=+4042.316921985" observedRunningTime="2025-12-11 11:18:45.332296163 +0000 UTC m=+4043.356155196" watchObservedRunningTime="2025-12-11 11:18:45.338858689 +0000 UTC m=+4043.362717722" Dec 11 11:18:48 crc kubenswrapper[4953]: I1211 11:18:48.871118 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:48 crc kubenswrapper[4953]: I1211 11:18:48.871778 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:48 crc kubenswrapper[4953]: I1211 11:18:48.916742 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:49 crc kubenswrapper[4953]: I1211 11:18:49.045605 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:49 crc kubenswrapper[4953]: I1211 11:18:49.045669 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:49 crc kubenswrapper[4953]: I1211 11:18:49.391515 4953 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:50 crc kubenswrapper[4953]: I1211 11:18:50.087877 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d2c4x" podUID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" containerName="registry-server" probeResult="failure" output=< Dec 11 11:18:50 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s Dec 11 11:18:50 crc kubenswrapper[4953]: > Dec 11 11:18:50 crc kubenswrapper[4953]: I1211 11:18:50.307149 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-29ggj"] Dec 11 11:18:51 crc kubenswrapper[4953]: I1211 11:18:51.356366 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-29ggj" podUID="4ee6f306-5fad-4a1e-9167-253ca428c9cf" containerName="registry-server" containerID="cri-o://44ee9cf32d1336e5f6d9369616995883f8979d2a9660de6ec310a7663d10a12b" gracePeriod=2 Dec 11 11:18:54 crc kubenswrapper[4953]: I1211 11:18:54.379492 4953 generic.go:334] "Generic (PLEG): container finished" podID="4ee6f306-5fad-4a1e-9167-253ca428c9cf" containerID="44ee9cf32d1336e5f6d9369616995883f8979d2a9660de6ec310a7663d10a12b" exitCode=0 Dec 11 11:18:54 crc kubenswrapper[4953]: I1211 11:18:54.379583 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29ggj" event={"ID":"4ee6f306-5fad-4a1e-9167-253ca428c9cf","Type":"ContainerDied","Data":"44ee9cf32d1336e5f6d9369616995883f8979d2a9660de6ec310a7663d10a12b"} Dec 11 11:18:55 crc kubenswrapper[4953]: I1211 11:18:55.882068 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.061639 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtbst\" (UniqueName: \"kubernetes.io/projected/4ee6f306-5fad-4a1e-9167-253ca428c9cf-kube-api-access-wtbst\") pod \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\" (UID: \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\") " Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.063003 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee6f306-5fad-4a1e-9167-253ca428c9cf-catalog-content\") pod \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\" (UID: \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\") " Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.063200 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee6f306-5fad-4a1e-9167-253ca428c9cf-utilities\") pod \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\" (UID: \"4ee6f306-5fad-4a1e-9167-253ca428c9cf\") " Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.064775 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ee6f306-5fad-4a1e-9167-253ca428c9cf-utilities" (OuterVolumeSpecName: "utilities") pod "4ee6f306-5fad-4a1e-9167-253ca428c9cf" (UID: "4ee6f306-5fad-4a1e-9167-253ca428c9cf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.070101 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee6f306-5fad-4a1e-9167-253ca428c9cf-kube-api-access-wtbst" (OuterVolumeSpecName: "kube-api-access-wtbst") pod "4ee6f306-5fad-4a1e-9167-253ca428c9cf" (UID: "4ee6f306-5fad-4a1e-9167-253ca428c9cf"). InnerVolumeSpecName "kube-api-access-wtbst". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.123411 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ee6f306-5fad-4a1e-9167-253ca428c9cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ee6f306-5fad-4a1e-9167-253ca428c9cf" (UID: "4ee6f306-5fad-4a1e-9167-253ca428c9cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.164387 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtbst\" (UniqueName: \"kubernetes.io/projected/4ee6f306-5fad-4a1e-9167-253ca428c9cf-kube-api-access-wtbst\") on node \"crc\" DevicePath \"\"" Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.164429 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee6f306-5fad-4a1e-9167-253ca428c9cf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.164442 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee6f306-5fad-4a1e-9167-253ca428c9cf-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.395844 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29ggj" event={"ID":"4ee6f306-5fad-4a1e-9167-253ca428c9cf","Type":"ContainerDied","Data":"d80616abba1e63c14da15e85f515ac22b0771aadc1011538a9302e19a6957f72"} Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.396240 4953 scope.go:117] "RemoveContainer" containerID="44ee9cf32d1336e5f6d9369616995883f8979d2a9660de6ec310a7663d10a12b" Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.395945 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-29ggj" Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.418680 4953 scope.go:117] "RemoveContainer" containerID="7cbb07199412ba4b3bc97ea68b264516736cec0e4b701be895a9f0e9b989f4cc" Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.431725 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-29ggj"] Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.440885 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-29ggj"] Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.454026 4953 scope.go:117] "RemoveContainer" containerID="40d83638c77d741b08c2d662d3947a7662d0d3b42fae6e2c2427f6851ea6b315" Dec 11 11:18:56 crc kubenswrapper[4953]: I1211 11:18:56.496832 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ee6f306-5fad-4a1e-9167-253ca428c9cf" path="/var/lib/kubelet/pods/4ee6f306-5fad-4a1e-9167-253ca428c9cf/volumes" Dec 11 11:18:59 crc kubenswrapper[4953]: I1211 11:18:59.123840 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:59 crc kubenswrapper[4953]: I1211 11:18:59.167918 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:18:59 crc kubenswrapper[4953]: I1211 11:18:59.359271 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2c4x"] Dec 11 11:19:00 crc kubenswrapper[4953]: I1211 11:19:00.426560 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d2c4x" podUID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" containerName="registry-server" containerID="cri-o://28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4" gracePeriod=2 Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.027363 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.071771 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-utilities\") pod \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\" (UID: \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\") " Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.071851 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkngm\" (UniqueName: \"kubernetes.io/projected/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-kube-api-access-pkngm\") pod \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\" (UID: \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\") " Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.072829 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-utilities" (OuterVolumeSpecName: "utilities") pod "3b7e6b93-2635-4c0a-a72e-61442dc68b2f" (UID: "3b7e6b93-2635-4c0a-a72e-61442dc68b2f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.080217 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-kube-api-access-pkngm" (OuterVolumeSpecName: "kube-api-access-pkngm") pod "3b7e6b93-2635-4c0a-a72e-61442dc68b2f" (UID: "3b7e6b93-2635-4c0a-a72e-61442dc68b2f"). InnerVolumeSpecName "kube-api-access-pkngm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.173237 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-catalog-content\") pod \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\" (UID: \"3b7e6b93-2635-4c0a-a72e-61442dc68b2f\") " Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.173611 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkngm\" (UniqueName: \"kubernetes.io/projected/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-kube-api-access-pkngm\") on node \"crc\" DevicePath \"\"" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.173630 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.292367 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b7e6b93-2635-4c0a-a72e-61442dc68b2f" (UID: "3b7e6b93-2635-4c0a-a72e-61442dc68b2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.375800 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7e6b93-2635-4c0a-a72e-61442dc68b2f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.438624 4953 generic.go:334] "Generic (PLEG): container finished" podID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" containerID="28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4" exitCode=0 Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.438687 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2c4x" event={"ID":"3b7e6b93-2635-4c0a-a72e-61442dc68b2f","Type":"ContainerDied","Data":"28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4"} Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.438727 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2c4x" event={"ID":"3b7e6b93-2635-4c0a-a72e-61442dc68b2f","Type":"ContainerDied","Data":"e43eb9939f726999f06be7fa5a676febefca27f8345014a0217cd309f56ed4eb"} Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.438725 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2c4x" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.438745 4953 scope.go:117] "RemoveContainer" containerID="28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.462154 4953 scope.go:117] "RemoveContainer" containerID="2c251b7ce2e4585592c5cdb3add8350d49cb6e6159d684243aa42195649f3eaa" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.488981 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2c4x"] Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.497030 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d2c4x"] Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.497691 4953 scope.go:117] "RemoveContainer" containerID="0ccb172752db018cb9d68aa76a14b56ca514e309bba4425d194f8805525ea9b6" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.563445 4953 scope.go:117] "RemoveContainer" containerID="28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4" Dec 11 11:19:01 crc kubenswrapper[4953]: E1211 11:19:01.564023 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4\": container with ID starting with 28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4 not found: ID does not exist" containerID="28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.564063 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4"} err="failed to get container status \"28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4\": rpc error: code = NotFound desc = could not find container \"28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4\": container with ID starting with 28327274e68c1ba02dda51f6dfa48ca9cd6a36e11970bd54de23afbc259827e4 not found: ID does not exist" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.564092 4953 scope.go:117] "RemoveContainer" containerID="2c251b7ce2e4585592c5cdb3add8350d49cb6e6159d684243aa42195649f3eaa" Dec 11 11:19:01 crc kubenswrapper[4953]: E1211 11:19:01.564348 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c251b7ce2e4585592c5cdb3add8350d49cb6e6159d684243aa42195649f3eaa\": container with ID starting with 2c251b7ce2e4585592c5cdb3add8350d49cb6e6159d684243aa42195649f3eaa not found: ID does not exist" containerID="2c251b7ce2e4585592c5cdb3add8350d49cb6e6159d684243aa42195649f3eaa" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.564376 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c251b7ce2e4585592c5cdb3add8350d49cb6e6159d684243aa42195649f3eaa"} err="failed to get container status \"2c251b7ce2e4585592c5cdb3add8350d49cb6e6159d684243aa42195649f3eaa\": rpc error: code = NotFound desc = could not find container \"2c251b7ce2e4585592c5cdb3add8350d49cb6e6159d684243aa42195649f3eaa\": container with ID starting with 2c251b7ce2e4585592c5cdb3add8350d49cb6e6159d684243aa42195649f3eaa not found: ID does not exist" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.564396 4953 scope.go:117] "RemoveContainer" 
containerID="0ccb172752db018cb9d68aa76a14b56ca514e309bba4425d194f8805525ea9b6" Dec 11 11:19:01 crc kubenswrapper[4953]: E1211 11:19:01.564609 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ccb172752db018cb9d68aa76a14b56ca514e309bba4425d194f8805525ea9b6\": container with ID starting with 0ccb172752db018cb9d68aa76a14b56ca514e309bba4425d194f8805525ea9b6 not found: ID does not exist" containerID="0ccb172752db018cb9d68aa76a14b56ca514e309bba4425d194f8805525ea9b6" Dec 11 11:19:01 crc kubenswrapper[4953]: I1211 11:19:01.564638 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ccb172752db018cb9d68aa76a14b56ca514e309bba4425d194f8805525ea9b6"} err="failed to get container status \"0ccb172752db018cb9d68aa76a14b56ca514e309bba4425d194f8805525ea9b6\": rpc error: code = NotFound desc = could not find container \"0ccb172752db018cb9d68aa76a14b56ca514e309bba4425d194f8805525ea9b6\": container with ID starting with 0ccb172752db018cb9d68aa76a14b56ca514e309bba4425d194f8805525ea9b6 not found: ID does not exist" Dec 11 11:19:02 crc kubenswrapper[4953]: I1211 11:19:02.484614 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" path="/var/lib/kubelet/pods/3b7e6b93-2635-4c0a-a72e-61442dc68b2f/volumes" Dec 11 11:19:48 crc kubenswrapper[4953]: I1211 11:19:48.193521 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:19:48 crc kubenswrapper[4953]: I1211 11:19:48.194563 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.259026 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s5msr"] Dec 11 11:20:08 crc kubenswrapper[4953]: E1211 11:20:08.259968 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee6f306-5fad-4a1e-9167-253ca428c9cf" containerName="extract-utilities" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.259985 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee6f306-5fad-4a1e-9167-253ca428c9cf" containerName="extract-utilities" Dec 11 11:20:08 crc kubenswrapper[4953]: E1211 11:20:08.260004 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee6f306-5fad-4a1e-9167-253ca428c9cf" containerName="extract-content" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.260012 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee6f306-5fad-4a1e-9167-253ca428c9cf" containerName="extract-content" Dec 11 11:20:08 crc kubenswrapper[4953]: E1211 11:20:08.260031 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" containerName="extract-utilities" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.260037 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" containerName="extract-utilities" Dec 11 11:20:08 crc kubenswrapper[4953]: E1211 
11:20:08.260052 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" containerName="extract-content" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.260058 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" containerName="extract-content" Dec 11 11:20:08 crc kubenswrapper[4953]: E1211 11:20:08.260069 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" containerName="registry-server" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.260075 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" containerName="registry-server" Dec 11 11:20:08 crc kubenswrapper[4953]: E1211 11:20:08.260088 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee6f306-5fad-4a1e-9167-253ca428c9cf" containerName="registry-server" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.260093 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee6f306-5fad-4a1e-9167-253ca428c9cf" containerName="registry-server" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.260245 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7e6b93-2635-4c0a-a72e-61442dc68b2f" containerName="registry-server" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.260266 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee6f306-5fad-4a1e-9167-253ca428c9cf" containerName="registry-server" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.261290 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.286704 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5msr"] Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.351961 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljgjx\" (UniqueName: \"kubernetes.io/projected/805ceaca-013c-4192-971c-f1111954422a-kube-api-access-ljgjx\") pod \"community-operators-s5msr\" (UID: \"805ceaca-013c-4192-971c-f1111954422a\") " pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.352057 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805ceaca-013c-4192-971c-f1111954422a-catalog-content\") pod \"community-operators-s5msr\" (UID: \"805ceaca-013c-4192-971c-f1111954422a\") " pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.352147 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805ceaca-013c-4192-971c-f1111954422a-utilities\") pod \"community-operators-s5msr\" (UID: \"805ceaca-013c-4192-971c-f1111954422a\") " pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.453318 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljgjx\" (UniqueName: \"kubernetes.io/projected/805ceaca-013c-4192-971c-f1111954422a-kube-api-access-ljgjx\") pod \"community-operators-s5msr\" (UID: \"805ceaca-013c-4192-971c-f1111954422a\") " 
pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.453396 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805ceaca-013c-4192-971c-f1111954422a-catalog-content\") pod \"community-operators-s5msr\" (UID: \"805ceaca-013c-4192-971c-f1111954422a\") " pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.453495 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805ceaca-013c-4192-971c-f1111954422a-utilities\") pod \"community-operators-s5msr\" (UID: \"805ceaca-013c-4192-971c-f1111954422a\") " pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.454244 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805ceaca-013c-4192-971c-f1111954422a-catalog-content\") pod \"community-operators-s5msr\" (UID: \"805ceaca-013c-4192-971c-f1111954422a\") " pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.454313 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805ceaca-013c-4192-971c-f1111954422a-utilities\") pod \"community-operators-s5msr\" (UID: \"805ceaca-013c-4192-971c-f1111954422a\") " pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.474423 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljgjx\" (UniqueName: \"kubernetes.io/projected/805ceaca-013c-4192-971c-f1111954422a-kube-api-access-ljgjx\") pod \"community-operators-s5msr\" (UID: \"805ceaca-013c-4192-971c-f1111954422a\") " pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:08 crc kubenswrapper[4953]: I1211 11:20:08.588389 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:09 crc kubenswrapper[4953]: I1211 11:20:09.089916 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5msr"] Dec 11 11:20:09 crc kubenswrapper[4953]: I1211 11:20:09.999213 4953 generic.go:334] "Generic (PLEG): container finished" podID="805ceaca-013c-4192-971c-f1111954422a" containerID="3208cacf489443b856ad44fc935041d37268d2fc2931910bce9a1d233ffd095a" exitCode=0 Dec 11 11:20:09 crc kubenswrapper[4953]: I1211 11:20:09.999262 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5msr" event={"ID":"805ceaca-013c-4192-971c-f1111954422a","Type":"ContainerDied","Data":"3208cacf489443b856ad44fc935041d37268d2fc2931910bce9a1d233ffd095a"} Dec 11 11:20:10 crc kubenswrapper[4953]: I1211 11:20:09.999287 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5msr" event={"ID":"805ceaca-013c-4192-971c-f1111954422a","Type":"ContainerStarted","Data":"269d7a83e342a125cb5195aa066c2567626bfe24b2e91f66f4a646f7c22687b6"} Dec 11 11:20:13 crc kubenswrapper[4953]: I1211 11:20:13.025903 4953 generic.go:334] "Generic (PLEG): container finished" podID="805ceaca-013c-4192-971c-f1111954422a" containerID="d0aa6f1075747c2cc464478611e18cbe8f0fcec569f2a2bb6f9d5366586c78e5" exitCode=0 Dec 11 11:20:13 crc kubenswrapper[4953]: I1211 11:20:13.026464 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5msr" event={"ID":"805ceaca-013c-4192-971c-f1111954422a","Type":"ContainerDied","Data":"d0aa6f1075747c2cc464478611e18cbe8f0fcec569f2a2bb6f9d5366586c78e5"} Dec 11 11:20:14 crc kubenswrapper[4953]: I1211 11:20:14.036518 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5msr" event={"ID":"805ceaca-013c-4192-971c-f1111954422a","Type":"ContainerStarted","Data":"705a8d9c2be4e9dcd9b3e4dd984fd3833f064f91773753c57942c58de719e62f"} Dec 11 11:20:14 crc kubenswrapper[4953]: I1211 11:20:14.062912 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s5msr" podStartSLOduration=2.638830276 podStartE2EDuration="6.06288894s" podCreationTimestamp="2025-12-11 11:20:08 +0000 UTC" firstStartedPulling="2025-12-11 11:20:10.001444273 +0000 UTC m=+4128.025303346" lastFinishedPulling="2025-12-11 11:20:13.425502977 +0000 UTC m=+4131.449362010" observedRunningTime="2025-12-11 11:20:14.057608284 +0000 UTC m=+4132.081467327" watchObservedRunningTime="2025-12-11 11:20:14.06288894 +0000 UTC m=+4132.086747973" Dec 11 11:20:18 crc kubenswrapper[4953]: I1211 11:20:18.193787 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:20:18 crc kubenswrapper[4953]: I1211 11:20:18.194381 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:20:18 crc kubenswrapper[4953]: I1211 11:20:18.590209 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:18 crc kubenswrapper[4953]: I1211 11:20:18.590260 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:18 crc kubenswrapper[4953]: I1211 11:20:18.643696 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:19 crc kubenswrapper[4953]: I1211 11:20:19.130805 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:21 crc kubenswrapper[4953]: I1211 11:20:21.430065 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5msr"] Dec 11 11:20:21 crc kubenswrapper[4953]: I1211 11:20:21.430605 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s5msr" podUID="805ceaca-013c-4192-971c-f1111954422a" containerName="registry-server" containerID="cri-o://705a8d9c2be4e9dcd9b3e4dd984fd3833f064f91773753c57942c58de719e62f" gracePeriod=2 Dec 11 11:20:22 crc kubenswrapper[4953]: I1211 11:20:22.106982 4953 generic.go:334] "Generic (PLEG): container finished" podID="805ceaca-013c-4192-971c-f1111954422a" containerID="705a8d9c2be4e9dcd9b3e4dd984fd3833f064f91773753c57942c58de719e62f" exitCode=0 Dec 11 11:20:22 crc kubenswrapper[4953]: I1211 11:20:22.107073 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5msr" event={"ID":"805ceaca-013c-4192-971c-f1111954422a","Type":"ContainerDied","Data":"705a8d9c2be4e9dcd9b3e4dd984fd3833f064f91773753c57942c58de719e62f"} Dec 11 11:20:22 crc kubenswrapper[4953]: I1211 11:20:22.383442 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:22 crc kubenswrapper[4953]: I1211 11:20:22.535702 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805ceaca-013c-4192-971c-f1111954422a-catalog-content\") pod \"805ceaca-013c-4192-971c-f1111954422a\" (UID: \"805ceaca-013c-4192-971c-f1111954422a\") " Dec 11 11:20:22 crc kubenswrapper[4953]: I1211 11:20:22.535793 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljgjx\" (UniqueName: \"kubernetes.io/projected/805ceaca-013c-4192-971c-f1111954422a-kube-api-access-ljgjx\") pod \"805ceaca-013c-4192-971c-f1111954422a\" (UID: \"805ceaca-013c-4192-971c-f1111954422a\") " Dec 11 11:20:22 crc kubenswrapper[4953]: I1211 11:20:22.535819 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805ceaca-013c-4192-971c-f1111954422a-utilities\") pod \"805ceaca-013c-4192-971c-f1111954422a\" (UID: \"805ceaca-013c-4192-971c-f1111954422a\") " Dec 11 11:20:22 crc kubenswrapper[4953]: I1211 11:20:22.537195 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/805ceaca-013c-4192-971c-f1111954422a-utilities" (OuterVolumeSpecName: "utilities") pod "805ceaca-013c-4192-971c-f1111954422a" (UID: "805ceaca-013c-4192-971c-f1111954422a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:20:22 crc kubenswrapper[4953]: I1211 11:20:22.542272 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805ceaca-013c-4192-971c-f1111954422a-kube-api-access-ljgjx" (OuterVolumeSpecName: "kube-api-access-ljgjx") pod "805ceaca-013c-4192-971c-f1111954422a" (UID: "805ceaca-013c-4192-971c-f1111954422a"). InnerVolumeSpecName "kube-api-access-ljgjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:20:22 crc kubenswrapper[4953]: I1211 11:20:22.594372 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/805ceaca-013c-4192-971c-f1111954422a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "805ceaca-013c-4192-971c-f1111954422a" (UID: "805ceaca-013c-4192-971c-f1111954422a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:20:22 crc kubenswrapper[4953]: I1211 11:20:22.637665 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805ceaca-013c-4192-971c-f1111954422a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:20:22 crc kubenswrapper[4953]: I1211 11:20:22.637832 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljgjx\" (UniqueName: \"kubernetes.io/projected/805ceaca-013c-4192-971c-f1111954422a-kube-api-access-ljgjx\") on node \"crc\" DevicePath \"\"" Dec 11 11:20:22 crc kubenswrapper[4953]: I1211 11:20:22.637877 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805ceaca-013c-4192-971c-f1111954422a-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:20:23 crc kubenswrapper[4953]: I1211 11:20:23.119375 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5msr" event={"ID":"805ceaca-013c-4192-971c-f1111954422a","Type":"ContainerDied","Data":"269d7a83e342a125cb5195aa066c2567626bfe24b2e91f66f4a646f7c22687b6"} Dec 11 11:20:23 crc kubenswrapper[4953]: I1211 11:20:23.119679 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5msr" Dec 11 11:20:23 crc kubenswrapper[4953]: I1211 11:20:23.119698 4953 scope.go:117] "RemoveContainer" containerID="705a8d9c2be4e9dcd9b3e4dd984fd3833f064f91773753c57942c58de719e62f" Dec 11 11:20:23 crc kubenswrapper[4953]: I1211 11:20:23.141880 4953 scope.go:117] "RemoveContainer" containerID="d0aa6f1075747c2cc464478611e18cbe8f0fcec569f2a2bb6f9d5366586c78e5" Dec 11 11:20:23 crc kubenswrapper[4953]: I1211 11:20:23.156967 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5msr"] Dec 11 11:20:23 crc kubenswrapper[4953]: I1211 11:20:23.162602 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s5msr"] Dec 11 11:20:23 crc kubenswrapper[4953]: I1211 11:20:23.179177 4953 scope.go:117] "RemoveContainer" containerID="3208cacf489443b856ad44fc935041d37268d2fc2931910bce9a1d233ffd095a" Dec 11 11:20:24 crc kubenswrapper[4953]: I1211 11:20:24.491675 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805ceaca-013c-4192-971c-f1111954422a" path="/var/lib/kubelet/pods/805ceaca-013c-4192-971c-f1111954422a/volumes" Dec 11 11:20:48 crc kubenswrapper[4953]: I1211 11:20:48.194201 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:20:48 crc kubenswrapper[4953]: I1211 11:20:48.194868 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:20:48 crc kubenswrapper[4953]: I1211 11:20:48.194924 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 11:20:48 crc kubenswrapper[4953]: I1211 11:20:48.195549 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3cfab28c87780c4d4dda32247e636cecfc6f66e76b16aacbd60db038d590ac70"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 11:20:48 crc kubenswrapper[4953]: I1211 11:20:48.195664 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://3cfab28c87780c4d4dda32247e636cecfc6f66e76b16aacbd60db038d590ac70" gracePeriod=600 Dec 11 11:20:48 crc kubenswrapper[4953]: I1211 11:20:48.365556 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="3cfab28c87780c4d4dda32247e636cecfc6f66e76b16aacbd60db038d590ac70" exitCode=0 Dec 11 11:20:48 crc kubenswrapper[4953]: I1211 11:20:48.365608 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" 
event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"3cfab28c87780c4d4dda32247e636cecfc6f66e76b16aacbd60db038d590ac70"} Dec 11 11:20:48 crc kubenswrapper[4953]: I1211 11:20:48.365662 4953 scope.go:117] "RemoveContainer" containerID="fdfaa916f9a5003c8063d2dc716fea8730c3a8110f54d90c8b00ba3bedd729a9" Dec 11 11:20:49 crc kubenswrapper[4953]: I1211 11:20:49.374775 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"} Dec 11 11:22:48 crc kubenswrapper[4953]: I1211 11:22:48.194359 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:22:48 crc kubenswrapper[4953]: I1211 11:22:48.195112 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:23:19 crc kubenswrapper[4953]: I1211 11:23:18.781617 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:23:19 crc kubenswrapper[4953]: I1211 11:23:18.782281 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:23:48 crc kubenswrapper[4953]: I1211 11:23:48.193989 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:23:48 crc kubenswrapper[4953]: I1211 11:23:48.194690 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:23:48 crc kubenswrapper[4953]: I1211 11:23:48.194779 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 11:23:48 crc kubenswrapper[4953]: I1211 11:23:48.195536 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 
11:23:48 crc kubenswrapper[4953]: I1211 11:23:48.195646 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f" gracePeriod=600
Dec 11 11:23:48 crc kubenswrapper[4953]: E1211 11:23:48.387189 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:23:49 crc kubenswrapper[4953]: I1211 11:23:49.033764 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f" exitCode=0
Dec 11 11:23:49 crc kubenswrapper[4953]: I1211 11:23:49.033821 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"}
Dec 11 11:23:49 crc kubenswrapper[4953]: I1211 11:23:49.034194 4953 scope.go:117] "RemoveContainer" containerID="3cfab28c87780c4d4dda32247e636cecfc6f66e76b16aacbd60db038d590ac70"
Dec 11 11:23:49 crc kubenswrapper[4953]: I1211 11:23:49.034778 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:23:49 crc kubenswrapper[4953]: E1211 11:23:49.035105 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:24:04 crc kubenswrapper[4953]: I1211 11:24:04.474526 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:24:04 crc kubenswrapper[4953]: E1211 11:24:04.475471 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:24:17 crc kubenswrapper[4953]: I1211 11:24:17.473713 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:24:17 crc kubenswrapper[4953]: E1211 11:24:17.474376 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:24:30 crc kubenswrapper[4953]: I1211 11:24:30.474393 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:24:30 crc kubenswrapper[4953]: E1211 11:24:30.475735 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:24:42 crc kubenswrapper[4953]: I1211 11:24:42.480236 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:24:42 crc kubenswrapper[4953]: E1211 11:24:42.481241 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:24:55 crc kubenswrapper[4953]: I1211 11:24:55.474426 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:24:55 crc kubenswrapper[4953]: E1211 11:24:55.475277 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:25:06 crc kubenswrapper[4953]: I1211 11:25:06.473236 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:25:06 crc kubenswrapper[4953]: E1211 11:25:06.473774 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:25:08 crc kubenswrapper[4953]: I1211 11:25:08.884517 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rxlrq"]
Dec 11 11:25:08 crc kubenswrapper[4953]: E1211 11:25:08.885452 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805ceaca-013c-4192-971c-f1111954422a" containerName="extract-utilities"
Dec 11 11:25:08 crc kubenswrapper[4953]: I1211 11:25:08.885493 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="805ceaca-013c-4192-971c-f1111954422a" containerName="extract-utilities"
Dec 11 11:25:08 crc kubenswrapper[4953]: E1211 11:25:08.885517 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805ceaca-013c-4192-971c-f1111954422a" containerName="registry-server"
Dec 11 11:25:08 crc kubenswrapper[4953]: I1211 11:25:08.885529 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="805ceaca-013c-4192-971c-f1111954422a" containerName="registry-server"
Dec 11 11:25:08 crc kubenswrapper[4953]: E1211 11:25:08.885568 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805ceaca-013c-4192-971c-f1111954422a" containerName="extract-content"
Dec 11 11:25:08 crc kubenswrapper[4953]: I1211 11:25:08.885608 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="805ceaca-013c-4192-971c-f1111954422a" containerName="extract-content"
Dec 11 11:25:08 crc kubenswrapper[4953]: I1211 11:25:08.885877 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="805ceaca-013c-4192-971c-f1111954422a" containerName="registry-server"
Dec 11 11:25:08 crc kubenswrapper[4953]: I1211 11:25:08.887712 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:08 crc kubenswrapper[4953]: I1211 11:25:08.898446 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxlrq"]
Dec 11 11:25:08 crc kubenswrapper[4953]: I1211 11:25:08.917834 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vklc2\" (UniqueName: \"kubernetes.io/projected/0bd34e5f-a5e3-453d-b494-0619f84d60ac-kube-api-access-vklc2\") pod \"redhat-marketplace-rxlrq\" (UID: \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\") " pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:08 crc kubenswrapper[4953]: I1211 11:25:08.918126 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd34e5f-a5e3-453d-b494-0619f84d60ac-utilities\") pod \"redhat-marketplace-rxlrq\" (UID: \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\") " pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:08 crc kubenswrapper[4953]: I1211 11:25:08.918249 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd34e5f-a5e3-453d-b494-0619f84d60ac-catalog-content\") pod \"redhat-marketplace-rxlrq\" (UID: \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\") " pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:09 crc kubenswrapper[4953]: I1211 11:25:09.019780 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd34e5f-a5e3-453d-b494-0619f84d60ac-utilities\") pod \"redhat-marketplace-rxlrq\" (UID: \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\") " pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:09 crc kubenswrapper[4953]: I1211 11:25:09.019850 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd34e5f-a5e3-453d-b494-0619f84d60ac-catalog-content\") pod \"redhat-marketplace-rxlrq\" (UID: \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\") " pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:09 crc kubenswrapper[4953]: I1211 11:25:09.019892 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vklc2\" (UniqueName: \"kubernetes.io/projected/0bd34e5f-a5e3-453d-b494-0619f84d60ac-kube-api-access-vklc2\") pod \"redhat-marketplace-rxlrq\" (UID: \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\") " pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:09 crc kubenswrapper[4953]: I1211 11:25:09.020391 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd34e5f-a5e3-453d-b494-0619f84d60ac-utilities\") pod \"redhat-marketplace-rxlrq\" (UID: \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\") " pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:09 crc kubenswrapper[4953]: I1211 11:25:09.020442 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd34e5f-a5e3-453d-b494-0619f84d60ac-catalog-content\") pod \"redhat-marketplace-rxlrq\" (UID: \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\") " pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:09 crc kubenswrapper[4953]: I1211 11:25:09.050730 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vklc2\" (UniqueName: \"kubernetes.io/projected/0bd34e5f-a5e3-453d-b494-0619f84d60ac-kube-api-access-vklc2\") pod \"redhat-marketplace-rxlrq\" (UID: \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\") " pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:09 crc kubenswrapper[4953]: I1211 11:25:09.246044 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:09 crc kubenswrapper[4953]: I1211 11:25:09.772112 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxlrq"]
Dec 11 11:25:10 crc kubenswrapper[4953]: I1211 11:25:10.059344 4953 generic.go:334] "Generic (PLEG): container finished" podID="0bd34e5f-a5e3-453d-b494-0619f84d60ac" containerID="4a34230f0d27cbb7124e7bbe07574fd486618c770aa420e78cae2cac1dca9728" exitCode=0
Dec 11 11:25:10 crc kubenswrapper[4953]: I1211 11:25:10.059449 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxlrq" event={"ID":"0bd34e5f-a5e3-453d-b494-0619f84d60ac","Type":"ContainerDied","Data":"4a34230f0d27cbb7124e7bbe07574fd486618c770aa420e78cae2cac1dca9728"}
Dec 11 11:25:10 crc kubenswrapper[4953]: I1211 11:25:10.059627 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxlrq" event={"ID":"0bd34e5f-a5e3-453d-b494-0619f84d60ac","Type":"ContainerStarted","Data":"809be60135d621cf5c27d66d6d832936c1ca537f04bdcca99972c7db2e4edc26"}
Dec 11 11:25:10 crc kubenswrapper[4953]: I1211 11:25:10.061975 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 11 11:25:12 crc kubenswrapper[4953]: I1211 11:25:12.075860 4953 generic.go:334] "Generic (PLEG): container finished" podID="0bd34e5f-a5e3-453d-b494-0619f84d60ac" containerID="3785c6135ff9c111d9debba657eaf65e2bd62c6e5a4114eea641182d6265a881" exitCode=0
Dec 11 11:25:12 crc kubenswrapper[4953]: I1211 11:25:12.075926 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxlrq" event={"ID":"0bd34e5f-a5e3-453d-b494-0619f84d60ac","Type":"ContainerDied","Data":"3785c6135ff9c111d9debba657eaf65e2bd62c6e5a4114eea641182d6265a881"}
Dec 11 11:25:13 crc kubenswrapper[4953]: I1211 11:25:13.084938 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxlrq" event={"ID":"0bd34e5f-a5e3-453d-b494-0619f84d60ac","Type":"ContainerStarted","Data":"f7c42b46d2af261dc92f85a1573ec0a3499197ae9bcb5bc64a072489182fd468"}
Dec 11 11:25:17 crc kubenswrapper[4953]: I1211 11:25:17.473625 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:25:17 crc kubenswrapper[4953]: E1211 11:25:17.474117 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:25:19 crc kubenswrapper[4953]: I1211 11:25:19.246119 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:19 crc kubenswrapper[4953]: I1211 11:25:19.246205 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:19 crc kubenswrapper[4953]: I1211 11:25:19.481105 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:19 crc kubenswrapper[4953]: I1211 11:25:19.506618 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rxlrq" podStartSLOduration=8.713263754 podStartE2EDuration="11.506599572s" podCreationTimestamp="2025-12-11 11:25:08 +0000 UTC" firstStartedPulling="2025-12-11 11:25:10.061614335 +0000 UTC m=+4428.085473368" lastFinishedPulling="2025-12-11 11:25:12.854950143 +0000 UTC m=+4430.878809186" observedRunningTime="2025-12-11 11:25:13.105252553 +0000 UTC m=+4431.129111586" watchObservedRunningTime="2025-12-11 11:25:19.506599572 +0000 UTC m=+4437.530458595"
Dec 11 11:25:20 crc kubenswrapper[4953]: I1211 11:25:20.199788 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:20 crc kubenswrapper[4953]: I1211 11:25:20.262228 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxlrq"]
Dec 11 11:25:22 crc kubenswrapper[4953]: I1211 11:25:22.155824 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rxlrq" podUID="0bd34e5f-a5e3-453d-b494-0619f84d60ac" containerName="registry-server" containerID="cri-o://f7c42b46d2af261dc92f85a1573ec0a3499197ae9bcb5bc64a072489182fd468" gracePeriod=2
Dec 11 11:25:23 crc kubenswrapper[4953]: I1211 11:25:23.166843 4953 generic.go:334] "Generic (PLEG): container finished" podID="0bd34e5f-a5e3-453d-b494-0619f84d60ac" containerID="f7c42b46d2af261dc92f85a1573ec0a3499197ae9bcb5bc64a072489182fd468" exitCode=0
Dec 11 11:25:23 crc kubenswrapper[4953]: I1211 11:25:23.166930 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxlrq" event={"ID":"0bd34e5f-a5e3-453d-b494-0619f84d60ac","Type":"ContainerDied","Data":"f7c42b46d2af261dc92f85a1573ec0a3499197ae9bcb5bc64a072489182fd468"}
Dec 11 11:25:23 crc kubenswrapper[4953]: I1211 11:25:23.687193 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:23 crc kubenswrapper[4953]: I1211 11:25:23.889597 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd34e5f-a5e3-453d-b494-0619f84d60ac-utilities\") pod \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\" (UID: \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\") "
Dec 11 11:25:23 crc kubenswrapper[4953]: I1211 11:25:23.890371 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vklc2\" (UniqueName: \"kubernetes.io/projected/0bd34e5f-a5e3-453d-b494-0619f84d60ac-kube-api-access-vklc2\") pod \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\" (UID: \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\") "
Dec 11 11:25:23 crc kubenswrapper[4953]: I1211 11:25:23.890631 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd34e5f-a5e3-453d-b494-0619f84d60ac-catalog-content\") pod \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\" (UID: \"0bd34e5f-a5e3-453d-b494-0619f84d60ac\") "
Dec 11 11:25:23 crc kubenswrapper[4953]: I1211 11:25:23.890482 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bd34e5f-a5e3-453d-b494-0619f84d60ac-utilities" (OuterVolumeSpecName: "utilities") pod "0bd34e5f-a5e3-453d-b494-0619f84d60ac" (UID: "0bd34e5f-a5e3-453d-b494-0619f84d60ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 11:25:23 crc kubenswrapper[4953]: I1211 11:25:23.891483 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd34e5f-a5e3-453d-b494-0619f84d60ac-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 11:25:23 crc kubenswrapper[4953]: I1211 11:25:23.899379 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd34e5f-a5e3-453d-b494-0619f84d60ac-kube-api-access-vklc2" (OuterVolumeSpecName: "kube-api-access-vklc2") pod "0bd34e5f-a5e3-453d-b494-0619f84d60ac" (UID: "0bd34e5f-a5e3-453d-b494-0619f84d60ac"). InnerVolumeSpecName "kube-api-access-vklc2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 11:25:23 crc kubenswrapper[4953]: I1211 11:25:23.913359 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bd34e5f-a5e3-453d-b494-0619f84d60ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bd34e5f-a5e3-453d-b494-0619f84d60ac" (UID: "0bd34e5f-a5e3-453d-b494-0619f84d60ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 11:25:23 crc kubenswrapper[4953]: I1211 11:25:23.992801 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vklc2\" (UniqueName: \"kubernetes.io/projected/0bd34e5f-a5e3-453d-b494-0619f84d60ac-kube-api-access-vklc2\") on node \"crc\" DevicePath \"\""
Dec 11 11:25:23 crc kubenswrapper[4953]: I1211 11:25:23.992836 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd34e5f-a5e3-453d-b494-0619f84d60ac-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 11:25:24 crc kubenswrapper[4953]: I1211 11:25:24.183204 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxlrq" event={"ID":"0bd34e5f-a5e3-453d-b494-0619f84d60ac","Type":"ContainerDied","Data":"809be60135d621cf5c27d66d6d832936c1ca537f04bdcca99972c7db2e4edc26"}
Dec 11 11:25:24 crc kubenswrapper[4953]: I1211 11:25:24.183298 4953 scope.go:117] "RemoveContainer" containerID="f7c42b46d2af261dc92f85a1573ec0a3499197ae9bcb5bc64a072489182fd468"
Dec 11 11:25:24 crc kubenswrapper[4953]: I1211 11:25:24.183342 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxlrq"
Dec 11 11:25:24 crc kubenswrapper[4953]: I1211 11:25:24.239744 4953 scope.go:117] "RemoveContainer" containerID="3785c6135ff9c111d9debba657eaf65e2bd62c6e5a4114eea641182d6265a881"
Dec 11 11:25:24 crc kubenswrapper[4953]: I1211 11:25:24.239987 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxlrq"]
Dec 11 11:25:24 crc kubenswrapper[4953]: I1211 11:25:24.246636 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxlrq"]
Dec 11 11:25:24 crc kubenswrapper[4953]: I1211 11:25:24.264456 4953 scope.go:117] "RemoveContainer" containerID="4a34230f0d27cbb7124e7bbe07574fd486618c770aa420e78cae2cac1dca9728"
Dec 11 11:25:24 crc kubenswrapper[4953]: I1211 11:25:24.485890 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd34e5f-a5e3-453d-b494-0619f84d60ac" path="/var/lib/kubelet/pods/0bd34e5f-a5e3-453d-b494-0619f84d60ac/volumes"
Dec 11 11:25:31 crc kubenswrapper[4953]: I1211 11:25:31.473782 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:25:31 crc kubenswrapper[4953]: E1211 11:25:31.475157 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:25:45 crc kubenswrapper[4953]: I1211 11:25:45.474603 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:25:45 crc kubenswrapper[4953]: E1211 11:25:45.475671 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:26:00 crc kubenswrapper[4953]: I1211 11:26:00.474313 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:26:00 crc kubenswrapper[4953]: E1211 11:26:00.475226 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:26:11 crc kubenswrapper[4953]: I1211 11:26:11.473213 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:26:11 crc kubenswrapper[4953]: E1211 11:26:11.474657 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:26:23 crc kubenswrapper[4953]: I1211 11:26:23.473514 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:26:23 crc kubenswrapper[4953]: E1211 11:26:23.475107 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:26:34 crc kubenswrapper[4953]: I1211 11:26:34.475548 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:26:34 crc kubenswrapper[4953]: E1211 11:26:34.477224 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:26:46 crc kubenswrapper[4953]: I1211 11:26:46.473661 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:26:46 crc kubenswrapper[4953]: E1211 11:26:46.474557 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:26:57 crc kubenswrapper[4953]: I1211 11:26:57.486741 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:26:57 crc kubenswrapper[4953]: E1211 11:26:57.488948 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:27:12 crc kubenswrapper[4953]: I1211 11:27:12.477388 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:27:12 crc kubenswrapper[4953]: E1211 11:27:12.478238 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:27:23 crc kubenswrapper[4953]: I1211 11:27:23.473834 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:27:23 crc kubenswrapper[4953]: E1211 11:27:23.474474 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:27:38 crc kubenswrapper[4953]: I1211 11:27:38.473478 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:27:38 crc kubenswrapper[4953]: E1211 11:27:38.474229 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:27:50 crc kubenswrapper[4953]: I1211 11:27:50.473734 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:27:50 crc kubenswrapper[4953]: E1211 11:27:50.474565 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:28:04 crc kubenswrapper[4953]: I1211 11:28:04.473869 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:28:04 crc kubenswrapper[4953]: E1211 11:28:04.474629 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:28:17 crc kubenswrapper[4953]: I1211 11:28:17.500423 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:28:17 crc kubenswrapper[4953]: E1211 11:28:17.501264 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:28:28 crc kubenswrapper[4953]: I1211 11:28:28.473114 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:28:28 crc kubenswrapper[4953]: E1211 11:28:28.474102 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:28:43 crc kubenswrapper[4953]: I1211 11:28:43.473733 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:28:43 crc kubenswrapper[4953]: E1211 11:28:43.474522 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:28:57 crc kubenswrapper[4953]: I1211 11:28:57.473543 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f"
Dec 11 11:28:58 crc kubenswrapper[4953]: I1211 11:28:58.322884 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"3fa714d5edfb81bdbba0eb00b5ad25ac380f07f086e8da7b3fec27e11ee65c51"}
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.321037 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nqdlc"]
Dec 11 11:28:59 crc kubenswrapper[4953]: E1211 11:28:59.322018 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd34e5f-a5e3-453d-b494-0619f84d60ac" containerName="extract-content"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.322051 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd34e5f-a5e3-453d-b494-0619f84d60ac" containerName="extract-content"
Dec 11 11:28:59 crc kubenswrapper[4953]: E1211 11:28:59.322071 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd34e5f-a5e3-453d-b494-0619f84d60ac" containerName="extract-utilities"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.322080 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd34e5f-a5e3-453d-b494-0619f84d60ac" containerName="extract-utilities"
Dec 11 11:28:59 crc kubenswrapper[4953]: E1211 11:28:59.322089 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd34e5f-a5e3-453d-b494-0619f84d60ac" containerName="registry-server"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.322097 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd34e5f-a5e3-453d-b494-0619f84d60ac" containerName="registry-server"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.322298 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd34e5f-a5e3-453d-b494-0619f84d60ac" containerName="registry-server"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.323674 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.334999 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z6lb\" (UniqueName: \"kubernetes.io/projected/daad820d-39e4-445d-9a3d-555c7ed62b43-kube-api-access-6z6lb\") pod \"certified-operators-nqdlc\" (UID: \"daad820d-39e4-445d-9a3d-555c7ed62b43\") " pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.335254 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daad820d-39e4-445d-9a3d-555c7ed62b43-catalog-content\") pod \"certified-operators-nqdlc\" (UID: \"daad820d-39e4-445d-9a3d-555c7ed62b43\") " pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.335348 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daad820d-39e4-445d-9a3d-555c7ed62b43-utilities\") pod \"certified-operators-nqdlc\" (UID: \"daad820d-39e4-445d-9a3d-555c7ed62b43\") " pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.350317 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nqdlc"]
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.436393 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z6lb\" (UniqueName: \"kubernetes.io/projected/daad820d-39e4-445d-9a3d-555c7ed62b43-kube-api-access-6z6lb\") pod \"certified-operators-nqdlc\" (UID: \"daad820d-39e4-445d-9a3d-555c7ed62b43\") " pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.436514 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daad820d-39e4-445d-9a3d-555c7ed62b43-catalog-content\") pod \"certified-operators-nqdlc\" (UID: \"daad820d-39e4-445d-9a3d-555c7ed62b43\") " pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.436560 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daad820d-39e4-445d-9a3d-555c7ed62b43-utilities\") pod \"certified-operators-nqdlc\" (UID: \"daad820d-39e4-445d-9a3d-555c7ed62b43\") " pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.437235 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daad820d-39e4-445d-9a3d-555c7ed62b43-catalog-content\") pod \"certified-operators-nqdlc\" (UID: \"daad820d-39e4-445d-9a3d-555c7ed62b43\") " pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.437274 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daad820d-39e4-445d-9a3d-555c7ed62b43-utilities\") pod \"certified-operators-nqdlc\" (UID: \"daad820d-39e4-445d-9a3d-555c7ed62b43\") " pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.466113 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z6lb\" (UniqueName: \"kubernetes.io/projected/daad820d-39e4-445d-9a3d-555c7ed62b43-kube-api-access-6z6lb\") pod \"certified-operators-nqdlc\" (UID: \"daad820d-39e4-445d-9a3d-555c7ed62b43\") " pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:28:59 crc kubenswrapper[4953]: I1211 11:28:59.651542 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:29:00 crc kubenswrapper[4953]: I1211 11:29:00.156134 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nqdlc"]
Dec 11 11:29:00 crc kubenswrapper[4953]: I1211 11:29:00.341516 4953 generic.go:334] "Generic (PLEG): container finished" podID="daad820d-39e4-445d-9a3d-555c7ed62b43" containerID="7e4c8a252dac334e5089a7f2b9367774f02bde92523a7f1e3144e5403b0959f9" exitCode=0
Dec 11 11:29:00 crc kubenswrapper[4953]: I1211 11:29:00.341567 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqdlc" event={"ID":"daad820d-39e4-445d-9a3d-555c7ed62b43","Type":"ContainerDied","Data":"7e4c8a252dac334e5089a7f2b9367774f02bde92523a7f1e3144e5403b0959f9"}
Dec 11 11:29:00 crc kubenswrapper[4953]: I1211 11:29:00.341624 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqdlc" event={"ID":"daad820d-39e4-445d-9a3d-555c7ed62b43","Type":"ContainerStarted","Data":"79c270e2c50629d7b9b8c9e26467d36793daf91d2431c79f0111a292b0f46a36"}
Dec 11 11:29:04 crc kubenswrapper[4953]: I1211 11:29:04.381314 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqdlc" event={"ID":"daad820d-39e4-445d-9a3d-555c7ed62b43","Type":"ContainerStarted","Data":"ac0093478ccad1905914eb7f6109430573e7014e8c181ce687e86d110504cdd5"}
Dec 11 11:29:05 crc kubenswrapper[4953]: I1211 11:29:05.392273 4953 generic.go:334] "Generic (PLEG): container finished" podID="daad820d-39e4-445d-9a3d-555c7ed62b43" containerID="ac0093478ccad1905914eb7f6109430573e7014e8c181ce687e86d110504cdd5" exitCode=0
Dec 11 11:29:05 crc kubenswrapper[4953]: I1211 11:29:05.392353 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqdlc" event={"ID":"daad820d-39e4-445d-9a3d-555c7ed62b43","Type":"ContainerDied","Data":"ac0093478ccad1905914eb7f6109430573e7014e8c181ce687e86d110504cdd5"}
Dec 11 11:29:06 crc kubenswrapper[4953]: I1211 11:29:06.402063 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqdlc" event={"ID":"daad820d-39e4-445d-9a3d-555c7ed62b43","Type":"ContainerStarted","Data":"ff1d851f9ba1b55f7a8c140ede092cd7c318189d0a0b92f258e27e6c88298f38"}
Dec 11 11:29:06 crc kubenswrapper[4953]: I1211 11:29:06.428406 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nqdlc" podStartSLOduration=1.705607862 podStartE2EDuration="7.428376009s" podCreationTimestamp="2025-12-11 11:28:59 +0000 UTC" firstStartedPulling="2025-12-11 11:29:00.343139779 +0000 UTC m=+4658.366998812" lastFinishedPulling="2025-12-11 11:29:06.065907886 +0000 UTC m=+4664.089766959" observedRunningTime="2025-12-11 11:29:06.426520471 +0000 UTC m=+4664.450379544" watchObservedRunningTime="2025-12-11 11:29:06.428376009 +0000 UTC m=+4664.452235052"
Dec 11 11:29:09 crc kubenswrapper[4953]: I1211 11:29:09.651868 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:29:09 crc kubenswrapper[4953]: I1211 11:29:09.652277 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:29:09 crc kubenswrapper[4953]: I1211 11:29:09.717483 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:29:19 crc kubenswrapper[4953]: I1211 11:29:19.730244 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nqdlc"
Dec 11 11:29:19 crc kubenswrapper[4953]: I1211 11:29:19.807911 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nqdlc"]
Dec 11 11:29:19 crc kubenswrapper[4953]: I1211 11:29:19.851241 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qpzsr"]
Dec 11 11:29:19 crc kubenswrapper[4953]: I1211 11:29:19.851546 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qpzsr" podUID="5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" containerName="registry-server" containerID="cri-o://fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f" gracePeriod=2
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.310548 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qpzsr"
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.423705 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-utilities\") pod \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\" (UID: \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\") "
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.423810 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46554\" (UniqueName: \"kubernetes.io/projected/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-kube-api-access-46554\") pod \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\" (UID: \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\") "
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.423899 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-catalog-content\") pod \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\" (UID: \"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b\") "
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.425238 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-utilities" (OuterVolumeSpecName: "utilities") pod "5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" (UID: "5ebb894a-b53e-4b23-a1e6-8b4e66388c5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.429035 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-kube-api-access-46554" (OuterVolumeSpecName: "kube-api-access-46554") pod "5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" (UID: "5ebb894a-b53e-4b23-a1e6-8b4e66388c5b"). InnerVolumeSpecName "kube-api-access-46554". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.482245 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" (UID: "5ebb894a-b53e-4b23-a1e6-8b4e66388c5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.520986 4953 generic.go:334] "Generic (PLEG): container finished" podID="5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" containerID="fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f" exitCode=0
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.521042 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qpzsr"
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.521058 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpzsr" event={"ID":"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b","Type":"ContainerDied","Data":"fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f"}
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.521520 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpzsr" event={"ID":"5ebb894a-b53e-4b23-a1e6-8b4e66388c5b","Type":"ContainerDied","Data":"b4f4654f06c55415ef78aea3bbe860dca832be3bcbea916b95893e822ed19edd"}
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.521564 4953 scope.go:117] "RemoveContainer" containerID="fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f"
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.525457 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.525493 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46554\" (UniqueName: \"kubernetes.io/projected/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-kube-api-access-46554\") on node \"crc\" DevicePath \"\""
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.525509 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.544479 4953 scope.go:117] "RemoveContainer" containerID="00efb21a6b85f07cab010e9b7a4e633955e2cfe7a9ce5b4a2b7881aa6ad9115f"
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.551749 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qpzsr"]
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.565107 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qpzsr"]
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.568669 4953 scope.go:117] "RemoveContainer" containerID="381d0f57172eeefde4d093874ab702cde0bdd332d7bea774d681c0000b2bd628"
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.593677 4953 scope.go:117] "RemoveContainer" containerID="fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f"
Dec 11 11:29:20 crc kubenswrapper[4953]: E1211 11:29:20.594120 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f\": container with ID starting with fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f not found: ID does not exist" containerID="fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f"
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.594171 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f"} err="failed to get container status \"fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f\": rpc error: code = NotFound desc = could not find container \"fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f\": container with ID starting with fe433190542a178c625fa28886f1e5702aa16a3a0da6e3b5f08918c2a134169f not found: ID does not exist"
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.594209 4953 scope.go:117] "RemoveContainer" containerID="00efb21a6b85f07cab010e9b7a4e633955e2cfe7a9ce5b4a2b7881aa6ad9115f"
Dec 11 11:29:20 crc kubenswrapper[4953]: E1211 11:29:20.594541 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00efb21a6b85f07cab010e9b7a4e633955e2cfe7a9ce5b4a2b7881aa6ad9115f\": container with ID starting with 00efb21a6b85f07cab010e9b7a4e633955e2cfe7a9ce5b4a2b7881aa6ad9115f not found: ID does not exist" containerID="00efb21a6b85f07cab010e9b7a4e633955e2cfe7a9ce5b4a2b7881aa6ad9115f"
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.594586 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00efb21a6b85f07cab010e9b7a4e633955e2cfe7a9ce5b4a2b7881aa6ad9115f"} err="failed to get container status \"00efb21a6b85f07cab010e9b7a4e633955e2cfe7a9ce5b4a2b7881aa6ad9115f\": rpc error: code = NotFound desc = could not find container \"00efb21a6b85f07cab010e9b7a4e633955e2cfe7a9ce5b4a2b7881aa6ad9115f\": container with ID starting with 00efb21a6b85f07cab010e9b7a4e633955e2cfe7a9ce5b4a2b7881aa6ad9115f not found: ID does not exist"
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.594605 4953 scope.go:117] "RemoveContainer" containerID="381d0f57172eeefde4d093874ab702cde0bdd332d7bea774d681c0000b2bd628"
Dec 11 11:29:20 crc kubenswrapper[4953]: E1211 11:29:20.594891 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"381d0f57172eeefde4d093874ab702cde0bdd332d7bea774d681c0000b2bd628\": container with ID starting with 381d0f57172eeefde4d093874ab702cde0bdd332d7bea774d681c0000b2bd628 not found: ID does not exist" containerID="381d0f57172eeefde4d093874ab702cde0bdd332d7bea774d681c0000b2bd628"
Dec 11 11:29:20 crc kubenswrapper[4953]: I1211 11:29:20.594922 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381d0f57172eeefde4d093874ab702cde0bdd332d7bea774d681c0000b2bd628"} err="failed to get container status \"381d0f57172eeefde4d093874ab702cde0bdd332d7bea774d681c0000b2bd628\": rpc error: code = NotFound desc = could not find container \"381d0f57172eeefde4d093874ab702cde0bdd332d7bea774d681c0000b2bd628\": container with ID starting with 381d0f57172eeefde4d093874ab702cde0bdd332d7bea774d681c0000b2bd628 not found: ID does not exist"
Dec 11 11:29:22 crc kubenswrapper[4953]: I1211 11:29:22.482334 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" path="/var/lib/kubelet/pods/5ebb894a-b53e-4b23-a1e6-8b4e66388c5b/volumes"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.180936 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"]
Dec 11 11:30:00 crc kubenswrapper[4953]: E1211 11:30:00.182173 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" containerName="extract-content"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.182193 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" containerName="extract-content"
Dec 11 11:30:00 crc kubenswrapper[4953]: E1211 11:30:00.182232 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" containerName="extract-utilities"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.182242 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" containerName="extract-utilities"
Dec 11 11:30:00 crc kubenswrapper[4953]: E1211 11:30:00.182257 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" containerName="registry-server"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.182297 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" containerName="registry-server"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.182529 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ebb894a-b53e-4b23-a1e6-8b4e66388c5b" containerName="registry-server"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.183453 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.186094 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.186106 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.204414 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-secret-volume\") pod \"collect-profiles-29424210-2kff9\" (UID: \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.204492 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-config-volume\") pod \"collect-profiles-29424210-2kff9\" (UID: \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.204540 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jms2r\" (UniqueName: \"kubernetes.io/projected/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-kube-api-access-jms2r\") pod \"collect-profiles-29424210-2kff9\" (UID: \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.222516 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"]
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.305482 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-config-volume\") pod \"collect-profiles-29424210-2kff9\" (UID: \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.305551 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jms2r\" (UniqueName: \"kubernetes.io/projected/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-kube-api-access-jms2r\") pod \"collect-profiles-29424210-2kff9\" (UID: \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.305685 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-secret-volume\") pod \"collect-profiles-29424210-2kff9\" (UID: \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.306706 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-config-volume\") pod \"collect-profiles-29424210-2kff9\" (UID: \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.312775 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-secret-volume\") pod \"collect-profiles-29424210-2kff9\" (UID: \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.323894 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jms2r\" (UniqueName: \"kubernetes.io/projected/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-kube-api-access-jms2r\") pod \"collect-profiles-29424210-2kff9\" (UID: \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:00 crc kubenswrapper[4953]: I1211 11:30:00.537812 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:01 crc kubenswrapper[4953]: I1211 11:30:01.088613 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"]
Dec 11 11:30:01 crc kubenswrapper[4953]: I1211 11:30:01.863175 4953 generic.go:334] "Generic (PLEG): container finished" podID="b902965d-9db2-492a-a6e4-a48b5d3f1b2b" containerID="ce8c4316df5d51d7b3394b40c8ccfe8810adff1a00749a6c143d81c508022e23" exitCode=0
Dec 11 11:30:01 crc kubenswrapper[4953]: I1211 11:30:01.863315 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9" event={"ID":"b902965d-9db2-492a-a6e4-a48b5d3f1b2b","Type":"ContainerDied","Data":"ce8c4316df5d51d7b3394b40c8ccfe8810adff1a00749a6c143d81c508022e23"}
Dec 11 11:30:01 crc kubenswrapper[4953]: I1211 11:30:01.863503 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9" event={"ID":"b902965d-9db2-492a-a6e4-a48b5d3f1b2b","Type":"ContainerStarted","Data":"d96f8f48bc5f2f1332b697d4932d60dd882abf828e4744b50eb771a4bc7f4ab2"}
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.148383 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.308988 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-secret-volume\") pod \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\" (UID: \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\") "
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.309059 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jms2r\" (UniqueName: \"kubernetes.io/projected/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-kube-api-access-jms2r\") pod \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\" (UID: \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\") "
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.309124 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-config-volume\") pod \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\" (UID: \"b902965d-9db2-492a-a6e4-a48b5d3f1b2b\") "
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.310103 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-config-volume" (OuterVolumeSpecName: "config-volume") pod "b902965d-9db2-492a-a6e4-a48b5d3f1b2b" (UID: "b902965d-9db2-492a-a6e4-a48b5d3f1b2b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.310566 4953 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-config-volume\") on node \"crc\" DevicePath \"\""
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.315296 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b902965d-9db2-492a-a6e4-a48b5d3f1b2b" (UID: "b902965d-9db2-492a-a6e4-a48b5d3f1b2b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.315952 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-kube-api-access-jms2r" (OuterVolumeSpecName: "kube-api-access-jms2r") pod "b902965d-9db2-492a-a6e4-a48b5d3f1b2b" (UID: "b902965d-9db2-492a-a6e4-a48b5d3f1b2b"). InnerVolumeSpecName "kube-api-access-jms2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.412170 4953 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.412215 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jms2r\" (UniqueName: \"kubernetes.io/projected/b902965d-9db2-492a-a6e4-a48b5d3f1b2b-kube-api-access-jms2r\") on node \"crc\" DevicePath \"\""
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.884883 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9" event={"ID":"b902965d-9db2-492a-a6e4-a48b5d3f1b2b","Type":"ContainerDied","Data":"d96f8f48bc5f2f1332b697d4932d60dd882abf828e4744b50eb771a4bc7f4ab2"}
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.884977 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d96f8f48bc5f2f1332b697d4932d60dd882abf828e4744b50eb771a4bc7f4ab2"
Dec 11 11:30:03 crc kubenswrapper[4953]: I1211 11:30:03.885125 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424210-2kff9"
Dec 11 11:30:04 crc kubenswrapper[4953]: I1211 11:30:04.240995 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf"]
Dec 11 11:30:04 crc kubenswrapper[4953]: I1211 11:30:04.246625 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424165-mdqxf"]
Dec 11 11:30:04 crc kubenswrapper[4953]: I1211 11:30:04.488276 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb5011e-d0ec-46b3-ae64-fbdf81f24461" path="/var/lib/kubelet/pods/bfb5011e-d0ec-46b3-ae64-fbdf81f24461/volumes"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.491735 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n45hw"]
Dec 11 11:30:18 crc kubenswrapper[4953]: E1211 11:30:18.493190 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b902965d-9db2-492a-a6e4-a48b5d3f1b2b" containerName="collect-profiles"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.493229 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b902965d-9db2-492a-a6e4-a48b5d3f1b2b" containerName="collect-profiles"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.493637 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="b902965d-9db2-492a-a6e4-a48b5d3f1b2b" containerName="collect-profiles"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.496293 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n45hw"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.505049 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n45hw"]
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.608868 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtsqv\" (UniqueName: \"kubernetes.io/projected/dab513ce-7833-4d8a-917d-8242dc09ec34-kube-api-access-gtsqv\") pod \"community-operators-n45hw\" (UID: \"dab513ce-7833-4d8a-917d-8242dc09ec34\") " pod="openshift-marketplace/community-operators-n45hw"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.608943 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab513ce-7833-4d8a-917d-8242dc09ec34-catalog-content\") pod \"community-operators-n45hw\" (UID: \"dab513ce-7833-4d8a-917d-8242dc09ec34\") " pod="openshift-marketplace/community-operators-n45hw"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.609060 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab513ce-7833-4d8a-917d-8242dc09ec34-utilities\") pod \"community-operators-n45hw\" (UID: \"dab513ce-7833-4d8a-917d-8242dc09ec34\") " pod="openshift-marketplace/community-operators-n45hw"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.710738 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab513ce-7833-4d8a-917d-8242dc09ec34-catalog-content\") pod \"community-operators-n45hw\" (UID: \"dab513ce-7833-4d8a-917d-8242dc09ec34\") " pod="openshift-marketplace/community-operators-n45hw"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.711163 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab513ce-7833-4d8a-917d-8242dc09ec34-utilities\") pod \"community-operators-n45hw\" (UID: \"dab513ce-7833-4d8a-917d-8242dc09ec34\") " pod="openshift-marketplace/community-operators-n45hw"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.711230 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtsqv\" (UniqueName: \"kubernetes.io/projected/dab513ce-7833-4d8a-917d-8242dc09ec34-kube-api-access-gtsqv\") pod \"community-operators-n45hw\" (UID: \"dab513ce-7833-4d8a-917d-8242dc09ec34\") " pod="openshift-marketplace/community-operators-n45hw"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.711329 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab513ce-7833-4d8a-917d-8242dc09ec34-catalog-content\") pod \"community-operators-n45hw\" (UID: \"dab513ce-7833-4d8a-917d-8242dc09ec34\") " pod="openshift-marketplace/community-operators-n45hw"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.711700 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab513ce-7833-4d8a-917d-8242dc09ec34-utilities\") pod \"community-operators-n45hw\" (UID: \"dab513ce-7833-4d8a-917d-8242dc09ec34\") " pod="openshift-marketplace/community-operators-n45hw"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.733290 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtsqv\" (UniqueName: \"kubernetes.io/projected/dab513ce-7833-4d8a-917d-8242dc09ec34-kube-api-access-gtsqv\") pod \"community-operators-n45hw\" (UID: \"dab513ce-7833-4d8a-917d-8242dc09ec34\") " pod="openshift-marketplace/community-operators-n45hw"
Dec 11 11:30:18 crc kubenswrapper[4953]: I1211 11:30:18.824535 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n45hw"
Dec 11 11:30:19 crc kubenswrapper[4953]: I1211 11:30:19.147398 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n45hw"]
Dec 11 11:30:19 crc kubenswrapper[4953]: E1211 11:30:19.610594 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddab513ce_7833_4d8a_917d_8242dc09ec34.slice/crio-conmon-6f1f3409646af8dcffdefae15e77fc64839e3938b4d61688caff8d888ce2c666.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddab513ce_7833_4d8a_917d_8242dc09ec34.slice/crio-6f1f3409646af8dcffdefae15e77fc64839e3938b4d61688caff8d888ce2c666.scope\": RecentStats: unable to find data in memory cache]"
Dec 11 11:30:20 crc kubenswrapper[4953]: I1211 11:30:20.139709 4953 generic.go:334] "Generic (PLEG): container finished" podID="dab513ce-7833-4d8a-917d-8242dc09ec34" containerID="6f1f3409646af8dcffdefae15e77fc64839e3938b4d61688caff8d888ce2c666" exitCode=0
Dec 11 11:30:20 crc kubenswrapper[4953]: I1211 11:30:20.139881 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45hw" event={"ID":"dab513ce-7833-4d8a-917d-8242dc09ec34","Type":"ContainerDied","Data":"6f1f3409646af8dcffdefae15e77fc64839e3938b4d61688caff8d888ce2c666"}
Dec 11 11:30:20 crc kubenswrapper[4953]: I1211 11:30:20.140128 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45hw" event={"ID":"dab513ce-7833-4d8a-917d-8242dc09ec34","Type":"ContainerStarted","Data":"4bcb4b2308548e9954917ca9f67de7a15fd6f5ccfff8cb98a54717b4daa06f03"}
Dec 11 11:30:20 crc kubenswrapper[4953]: I1211 11:30:20.142058 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 11 11:30:21 crc kubenswrapper[4953]: I1211 11:30:21.147863 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45hw" event={"ID":"dab513ce-7833-4d8a-917d-8242dc09ec34","Type":"ContainerStarted","Data":"cc8a92b3a3bcd337cf4ace80dd0345931f6bb92b979cfd57c197fc8e1e4ead97"}
Dec 11 11:30:22 crc kubenswrapper[4953]: I1211 11:30:22.161039 4953 generic.go:334] "Generic (PLEG): container finished" podID="dab513ce-7833-4d8a-917d-8242dc09ec34" containerID="cc8a92b3a3bcd337cf4ace80dd0345931f6bb92b979cfd57c197fc8e1e4ead97" exitCode=0
Dec 11 11:30:22 crc kubenswrapper[4953]: I1211 11:30:22.162187 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45hw" event={"ID":"dab513ce-7833-4d8a-917d-8242dc09ec34","Type":"ContainerDied","Data":"cc8a92b3a3bcd337cf4ace80dd0345931f6bb92b979cfd57c197fc8e1e4ead97"}
Dec 11 11:30:24 crc kubenswrapper[4953]: I1211 11:30:24.178817 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45hw"
event={"ID":"dab513ce-7833-4d8a-917d-8242dc09ec34","Type":"ContainerStarted","Data":"cd665c87ffbdd34298d63185d8f0d96ddaf4be6f62d8214fd2b51cdd942f0d37"} Dec 11 11:30:24 crc kubenswrapper[4953]: I1211 11:30:24.211321 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n45hw" podStartSLOduration=3.379266958 podStartE2EDuration="6.211301579s" podCreationTimestamp="2025-12-11 11:30:18 +0000 UTC" firstStartedPulling="2025-12-11 11:30:20.141865857 +0000 UTC m=+4738.165724890" lastFinishedPulling="2025-12-11 11:30:22.973900438 +0000 UTC m=+4740.997759511" observedRunningTime="2025-12-11 11:30:24.208242804 +0000 UTC m=+4742.232101867" watchObservedRunningTime="2025-12-11 11:30:24.211301579 +0000 UTC m=+4742.235160622" Dec 11 11:30:28 crc kubenswrapper[4953]: I1211 11:30:28.825340 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n45hw" Dec 11 11:30:28 crc kubenswrapper[4953]: I1211 11:30:28.826867 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n45hw" Dec 11 11:30:28 crc kubenswrapper[4953]: I1211 11:30:28.896219 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n45hw" Dec 11 11:30:29 crc kubenswrapper[4953]: I1211 11:30:29.304628 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n45hw" Dec 11 11:30:29 crc kubenswrapper[4953]: I1211 11:30:29.372291 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n45hw"] Dec 11 11:30:31 crc kubenswrapper[4953]: I1211 11:30:31.243279 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n45hw" podUID="dab513ce-7833-4d8a-917d-8242dc09ec34" containerName="registry-server" containerID="cri-o://cd665c87ffbdd34298d63185d8f0d96ddaf4be6f62d8214fd2b51cdd942f0d37" gracePeriod=2 Dec 11 11:30:32 crc kubenswrapper[4953]: I1211 11:30:32.255870 4953 generic.go:334] "Generic (PLEG): container finished" podID="dab513ce-7833-4d8a-917d-8242dc09ec34" containerID="cd665c87ffbdd34298d63185d8f0d96ddaf4be6f62d8214fd2b51cdd942f0d37" exitCode=0 Dec 11 11:30:32 crc kubenswrapper[4953]: I1211 11:30:32.255954 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45hw" event={"ID":"dab513ce-7833-4d8a-917d-8242dc09ec34","Type":"ContainerDied","Data":"cd665c87ffbdd34298d63185d8f0d96ddaf4be6f62d8214fd2b51cdd942f0d37"} Dec 11 11:30:32 crc kubenswrapper[4953]: I1211 11:30:32.389475 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n45hw" Dec 11 11:30:32 crc kubenswrapper[4953]: I1211 11:30:32.553794 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab513ce-7833-4d8a-917d-8242dc09ec34-utilities\") pod \"dab513ce-7833-4d8a-917d-8242dc09ec34\" (UID: \"dab513ce-7833-4d8a-917d-8242dc09ec34\") " Dec 11 11:30:32 crc kubenswrapper[4953]: I1211 11:30:32.554061 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtsqv\" (UniqueName: \"kubernetes.io/projected/dab513ce-7833-4d8a-917d-8242dc09ec34-kube-api-access-gtsqv\") pod \"dab513ce-7833-4d8a-917d-8242dc09ec34\" (UID: \"dab513ce-7833-4d8a-917d-8242dc09ec34\") " Dec 11 11:30:32 crc kubenswrapper[4953]: I1211 11:30:32.554250 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab513ce-7833-4d8a-917d-8242dc09ec34-catalog-content\") pod \"dab513ce-7833-4d8a-917d-8242dc09ec34\" (UID: \"dab513ce-7833-4d8a-917d-8242dc09ec34\") " Dec 11 11:30:32 crc kubenswrapper[4953]: I1211 11:30:32.555015 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab513ce-7833-4d8a-917d-8242dc09ec34-utilities" (OuterVolumeSpecName: "utilities") pod "dab513ce-7833-4d8a-917d-8242dc09ec34" (UID: "dab513ce-7833-4d8a-917d-8242dc09ec34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:30:32 crc kubenswrapper[4953]: I1211 11:30:32.563984 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab513ce-7833-4d8a-917d-8242dc09ec34-kube-api-access-gtsqv" (OuterVolumeSpecName: "kube-api-access-gtsqv") pod "dab513ce-7833-4d8a-917d-8242dc09ec34" (UID: "dab513ce-7833-4d8a-917d-8242dc09ec34"). InnerVolumeSpecName "kube-api-access-gtsqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:30:32 crc kubenswrapper[4953]: I1211 11:30:32.614597 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab513ce-7833-4d8a-917d-8242dc09ec34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dab513ce-7833-4d8a-917d-8242dc09ec34" (UID: "dab513ce-7833-4d8a-917d-8242dc09ec34"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:30:32 crc kubenswrapper[4953]: I1211 11:30:32.656529 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab513ce-7833-4d8a-917d-8242dc09ec34-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:30:32 crc kubenswrapper[4953]: I1211 11:30:32.656561 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab513ce-7833-4d8a-917d-8242dc09ec34-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:30:32 crc kubenswrapper[4953]: I1211 11:30:32.656587 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtsqv\" (UniqueName: \"kubernetes.io/projected/dab513ce-7833-4d8a-917d-8242dc09ec34-kube-api-access-gtsqv\") on node \"crc\" DevicePath \"\"" Dec 11 11:30:33 crc kubenswrapper[4953]: I1211 11:30:33.266906 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45hw" event={"ID":"dab513ce-7833-4d8a-917d-8242dc09ec34","Type":"ContainerDied","Data":"4bcb4b2308548e9954917ca9f67de7a15fd6f5ccfff8cb98a54717b4daa06f03"} Dec 11 11:30:33 crc kubenswrapper[4953]: I1211 11:30:33.266969 4953 scope.go:117] "RemoveContainer" containerID="cd665c87ffbdd34298d63185d8f0d96ddaf4be6f62d8214fd2b51cdd942f0d37" Dec 11 11:30:33 crc kubenswrapper[4953]: I1211 11:30:33.267343 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n45hw" Dec 11 11:30:33 crc kubenswrapper[4953]: I1211 11:30:33.301164 4953 scope.go:117] "RemoveContainer" containerID="cc8a92b3a3bcd337cf4ace80dd0345931f6bb92b979cfd57c197fc8e1e4ead97" Dec 11 11:30:33 crc kubenswrapper[4953]: I1211 11:30:33.305374 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n45hw"] Dec 11 11:30:33 crc kubenswrapper[4953]: I1211 11:30:33.324803 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n45hw"] Dec 11 11:30:33 crc kubenswrapper[4953]: I1211 11:30:33.334201 4953 scope.go:117] "RemoveContainer" containerID="6f1f3409646af8dcffdefae15e77fc64839e3938b4d61688caff8d888ce2c666" Dec 11 11:30:34 crc kubenswrapper[4953]: I1211 11:30:34.485172 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab513ce-7833-4d8a-917d-8242dc09ec34" path="/var/lib/kubelet/pods/dab513ce-7833-4d8a-917d-8242dc09ec34/volumes" Dec 11 11:30:34 crc kubenswrapper[4953]: I1211 11:30:34.884534 4953 scope.go:117] "RemoveContainer" containerID="56ebdf6585838df8ae69a142f953c34c4c160dbb5a86cd58fc4df875b981a04a" Dec 11 11:31:18 crc kubenswrapper[4953]: I1211 11:31:18.194714 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:31:18 crc kubenswrapper[4953]: I1211 11:31:18.195446 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:31:48 crc kubenswrapper[4953]: I1211 11:31:48.194285 4953 patch_prober.go:28] interesting 
pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:31:48 crc kubenswrapper[4953]: I1211 11:31:48.195081 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:32:18 crc kubenswrapper[4953]: I1211 11:32:18.195344 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:32:18 crc kubenswrapper[4953]: I1211 11:32:18.196077 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:32:18 crc kubenswrapper[4953]: I1211 11:32:18.196140 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 11:32:18 crc kubenswrapper[4953]: I1211 11:32:18.196865 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3fa714d5edfb81bdbba0eb00b5ad25ac380f07f086e8da7b3fec27e11ee65c51"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 11:32:18 crc kubenswrapper[4953]: I1211 11:32:18.196982 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://3fa714d5edfb81bdbba0eb00b5ad25ac380f07f086e8da7b3fec27e11ee65c51" gracePeriod=600 Dec 11 11:32:18 crc kubenswrapper[4953]: I1211 11:32:18.465463 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="3fa714d5edfb81bdbba0eb00b5ad25ac380f07f086e8da7b3fec27e11ee65c51" exitCode=0 Dec 11 11:32:18 crc kubenswrapper[4953]: I1211 11:32:18.465562 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"3fa714d5edfb81bdbba0eb00b5ad25ac380f07f086e8da7b3fec27e11ee65c51"} Dec 11 11:32:18 crc kubenswrapper[4953]: I1211 11:32:18.465811 4953 scope.go:117] "RemoveContainer" containerID="4634a6d8d4b364afe2a116a75b1386adda5bb4d5e2500c52df272bf1d390877f" Dec 11 11:32:19 crc kubenswrapper[4953]: I1211 11:32:19.477075 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51"} Dec 11 11:33:48 
crc kubenswrapper[4953]: I1211 11:33:48.115195 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bb4gt"] Dec 11 11:33:48 crc kubenswrapper[4953]: E1211 11:33:48.117439 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab513ce-7833-4d8a-917d-8242dc09ec34" containerName="extract-content" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.117651 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab513ce-7833-4d8a-917d-8242dc09ec34" containerName="extract-content" Dec 11 11:33:48 crc kubenswrapper[4953]: E1211 11:33:48.117672 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab513ce-7833-4d8a-917d-8242dc09ec34" containerName="registry-server" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.117679 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab513ce-7833-4d8a-917d-8242dc09ec34" containerName="registry-server" Dec 11 11:33:48 crc kubenswrapper[4953]: E1211 11:33:48.117691 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab513ce-7833-4d8a-917d-8242dc09ec34" containerName="extract-utilities" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.117698 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab513ce-7833-4d8a-917d-8242dc09ec34" containerName="extract-utilities" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.117861 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab513ce-7833-4d8a-917d-8242dc09ec34" containerName="registry-server" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.118901 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.139203 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f957cb83-d834-4ba8-9660-0175dfc0a151-utilities\") pod \"redhat-operators-bb4gt\" (UID: \"f957cb83-d834-4ba8-9660-0175dfc0a151\") " pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.139264 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f957cb83-d834-4ba8-9660-0175dfc0a151-catalog-content\") pod \"redhat-operators-bb4gt\" (UID: \"f957cb83-d834-4ba8-9660-0175dfc0a151\") " pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.139358 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdrhr\" (UniqueName: \"kubernetes.io/projected/f957cb83-d834-4ba8-9660-0175dfc0a151-kube-api-access-xdrhr\") pod \"redhat-operators-bb4gt\" (UID: \"f957cb83-d834-4ba8-9660-0175dfc0a151\") " pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.173746 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bb4gt"] Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.240181 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f957cb83-d834-4ba8-9660-0175dfc0a151-utilities\") pod \"redhat-operators-bb4gt\" (UID: \"f957cb83-d834-4ba8-9660-0175dfc0a151\") " pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:48 crc 
kubenswrapper[4953]: I1211 11:33:48.240232 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f957cb83-d834-4ba8-9660-0175dfc0a151-catalog-content\") pod \"redhat-operators-bb4gt\" (UID: \"f957cb83-d834-4ba8-9660-0175dfc0a151\") " pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.240270 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdrhr\" (UniqueName: \"kubernetes.io/projected/f957cb83-d834-4ba8-9660-0175dfc0a151-kube-api-access-xdrhr\") pod \"redhat-operators-bb4gt\" (UID: \"f957cb83-d834-4ba8-9660-0175dfc0a151\") " pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.242763 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f957cb83-d834-4ba8-9660-0175dfc0a151-utilities\") pod \"redhat-operators-bb4gt\" (UID: \"f957cb83-d834-4ba8-9660-0175dfc0a151\") " pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.243023 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f957cb83-d834-4ba8-9660-0175dfc0a151-catalog-content\") pod \"redhat-operators-bb4gt\" (UID: \"f957cb83-d834-4ba8-9660-0175dfc0a151\") " pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.263600 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdrhr\" (UniqueName: \"kubernetes.io/projected/f957cb83-d834-4ba8-9660-0175dfc0a151-kube-api-access-xdrhr\") pod \"redhat-operators-bb4gt\" (UID: \"f957cb83-d834-4ba8-9660-0175dfc0a151\") " pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.436969 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:48 crc kubenswrapper[4953]: I1211 11:33:48.702692 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bb4gt"] Dec 11 11:33:49 crc kubenswrapper[4953]: I1211 11:33:49.251746 4953 generic.go:334] "Generic (PLEG): container finished" podID="f957cb83-d834-4ba8-9660-0175dfc0a151" containerID="d35d877acfc574760b584077c3ded968f541671edd460330f4873b8e9d20f446" exitCode=0 Dec 11 11:33:49 crc kubenswrapper[4953]: I1211 11:33:49.252093 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb4gt" event={"ID":"f957cb83-d834-4ba8-9660-0175dfc0a151","Type":"ContainerDied","Data":"d35d877acfc574760b584077c3ded968f541671edd460330f4873b8e9d20f446"} Dec 11 11:33:49 crc kubenswrapper[4953]: I1211 11:33:49.252126 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb4gt" event={"ID":"f957cb83-d834-4ba8-9660-0175dfc0a151","Type":"ContainerStarted","Data":"a21cb50e15f35fca6bc82e59e5fd5dc398f127ed605a44ba450ab8f5c653653e"} Dec 11 11:33:50 crc kubenswrapper[4953]: I1211 11:33:50.264387 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb4gt" event={"ID":"f957cb83-d834-4ba8-9660-0175dfc0a151","Type":"ContainerStarted","Data":"6f7e99607345cc80b0759967e74a64ed5644a844386dfad3dab2b2921e9e1ab2"} Dec 11 11:33:51 crc kubenswrapper[4953]: I1211 11:33:51.279851 4953 generic.go:334] "Generic (PLEG): container finished" podID="f957cb83-d834-4ba8-9660-0175dfc0a151" containerID="6f7e99607345cc80b0759967e74a64ed5644a844386dfad3dab2b2921e9e1ab2" exitCode=0 Dec 11 11:33:51 crc kubenswrapper[4953]: I1211 11:33:51.279964 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb4gt" event={"ID":"f957cb83-d834-4ba8-9660-0175dfc0a151","Type":"ContainerDied","Data":"6f7e99607345cc80b0759967e74a64ed5644a844386dfad3dab2b2921e9e1ab2"} Dec 11 11:33:53 crc kubenswrapper[4953]: I1211 11:33:53.306204 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb4gt" event={"ID":"f957cb83-d834-4ba8-9660-0175dfc0a151","Type":"ContainerStarted","Data":"3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636"} Dec 11 11:33:53 crc kubenswrapper[4953]: I1211 11:33:53.345299 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bb4gt" podStartSLOduration=1.7717764360000001 podStartE2EDuration="5.345268983s" podCreationTimestamp="2025-12-11 11:33:48 +0000 UTC" firstStartedPulling="2025-12-11 11:33:49.253597632 +0000 UTC m=+4947.277456665" lastFinishedPulling="2025-12-11 11:33:52.827090159 +0000 UTC m=+4950.850949212" observedRunningTime="2025-12-11 11:33:53.330961343 +0000 UTC m=+4951.354820406" watchObservedRunningTime="2025-12-11 11:33:53.345268983 +0000 UTC m=+4951.369128056" Dec 11 11:33:58 crc kubenswrapper[4953]: I1211 11:33:58.438938 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:58 crc kubenswrapper[4953]: I1211 11:33:58.439431 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:33:59 crc kubenswrapper[4953]: I1211 11:33:59.479858 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bb4gt" 
podUID="f957cb83-d834-4ba8-9660-0175dfc0a151" containerName="registry-server" probeResult="failure" output=< Dec 11 11:33:59 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s Dec 11 11:33:59 crc kubenswrapper[4953]: > Dec 11 11:34:08 crc kubenswrapper[4953]: I1211 11:34:08.486722 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:34:08 crc kubenswrapper[4953]: I1211 11:34:08.549045 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:34:08 crc kubenswrapper[4953]: I1211 11:34:08.727348 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bb4gt"] Dec 11 11:34:10 crc kubenswrapper[4953]: I1211 11:34:10.452838 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bb4gt" podUID="f957cb83-d834-4ba8-9660-0175dfc0a151" containerName="registry-server" containerID="cri-o://3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636" gracePeriod=2 Dec 11 11:34:10 crc kubenswrapper[4953]: I1211 11:34:10.905651 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:34:10 crc kubenswrapper[4953]: I1211 11:34:10.925484 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f957cb83-d834-4ba8-9660-0175dfc0a151-catalog-content\") pod \"f957cb83-d834-4ba8-9660-0175dfc0a151\" (UID: \"f957cb83-d834-4ba8-9660-0175dfc0a151\") " Dec 11 11:34:10 crc kubenswrapper[4953]: I1211 11:34:10.925565 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdrhr\" (UniqueName: \"kubernetes.io/projected/f957cb83-d834-4ba8-9660-0175dfc0a151-kube-api-access-xdrhr\") pod \"f957cb83-d834-4ba8-9660-0175dfc0a151\" (UID: \"f957cb83-d834-4ba8-9660-0175dfc0a151\") " Dec 11 11:34:10 crc kubenswrapper[4953]: I1211 11:34:10.933603 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f957cb83-d834-4ba8-9660-0175dfc0a151-kube-api-access-xdrhr" (OuterVolumeSpecName: "kube-api-access-xdrhr") pod "f957cb83-d834-4ba8-9660-0175dfc0a151" (UID: "f957cb83-d834-4ba8-9660-0175dfc0a151"). InnerVolumeSpecName "kube-api-access-xdrhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.026977 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f957cb83-d834-4ba8-9660-0175dfc0a151-utilities\") pod \"f957cb83-d834-4ba8-9660-0175dfc0a151\" (UID: \"f957cb83-d834-4ba8-9660-0175dfc0a151\") " Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.027701 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdrhr\" (UniqueName: \"kubernetes.io/projected/f957cb83-d834-4ba8-9660-0175dfc0a151-kube-api-access-xdrhr\") on node \"crc\" DevicePath \"\"" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.027934 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f957cb83-d834-4ba8-9660-0175dfc0a151-utilities" (OuterVolumeSpecName: "utilities") pod "f957cb83-d834-4ba8-9660-0175dfc0a151" (UID: "f957cb83-d834-4ba8-9660-0175dfc0a151"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.047825 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f957cb83-d834-4ba8-9660-0175dfc0a151-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f957cb83-d834-4ba8-9660-0175dfc0a151" (UID: "f957cb83-d834-4ba8-9660-0175dfc0a151"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.129203 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f957cb83-d834-4ba8-9660-0175dfc0a151-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.129272 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f957cb83-d834-4ba8-9660-0175dfc0a151-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.465214 4953 generic.go:334] "Generic (PLEG): container finished" podID="f957cb83-d834-4ba8-9660-0175dfc0a151" containerID="3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636" exitCode=0 Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.465276 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb4gt" event={"ID":"f957cb83-d834-4ba8-9660-0175dfc0a151","Type":"ContainerDied","Data":"3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636"} Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.465339 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb4gt" event={"ID":"f957cb83-d834-4ba8-9660-0175dfc0a151","Type":"ContainerDied","Data":"a21cb50e15f35fca6bc82e59e5fd5dc398f127ed605a44ba450ab8f5c653653e"} Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.465361 4953 scope.go:117] "RemoveContainer" containerID="3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.466173 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bb4gt" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.501968 4953 scope.go:117] "RemoveContainer" containerID="6f7e99607345cc80b0759967e74a64ed5644a844386dfad3dab2b2921e9e1ab2" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.520228 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bb4gt"] Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.526676 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bb4gt"] Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.545034 4953 scope.go:117] "RemoveContainer" containerID="d35d877acfc574760b584077c3ded968f541671edd460330f4873b8e9d20f446" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.569212 4953 scope.go:117] "RemoveContainer" containerID="3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636" Dec 11 11:34:11 crc kubenswrapper[4953]: E1211 11:34:11.569836 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636\": container with ID starting with 3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636 not found: ID does not exist" containerID="3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.569887 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636"} err="failed to get container status \"3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636\": rpc error: code = NotFound desc = could not find container \"3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636\": container with ID starting with 3a4d6a3c0e2774cd4dd97a026a64c4958502675ac56512b59af0a06caf621636 not found: ID does not exist" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.569927 4953 scope.go:117] "RemoveContainer" containerID="6f7e99607345cc80b0759967e74a64ed5644a844386dfad3dab2b2921e9e1ab2" Dec 11 11:34:11 crc kubenswrapper[4953]: E1211 11:34:11.570384 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f7e99607345cc80b0759967e74a64ed5644a844386dfad3dab2b2921e9e1ab2\": container with ID starting with 6f7e99607345cc80b0759967e74a64ed5644a844386dfad3dab2b2921e9e1ab2 not found: ID does not exist" containerID="6f7e99607345cc80b0759967e74a64ed5644a844386dfad3dab2b2921e9e1ab2" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.570415 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f7e99607345cc80b0759967e74a64ed5644a844386dfad3dab2b2921e9e1ab2"} err="failed to get container status \"6f7e99607345cc80b0759967e74a64ed5644a844386dfad3dab2b2921e9e1ab2\": rpc error: code = NotFound desc = could not find container \"6f7e99607345cc80b0759967e74a64ed5644a844386dfad3dab2b2921e9e1ab2\": container with ID starting with 6f7e99607345cc80b0759967e74a64ed5644a844386dfad3dab2b2921e9e1ab2 not found: ID does not exist" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.570438 4953 scope.go:117] "RemoveContainer" containerID="d35d877acfc574760b584077c3ded968f541671edd460330f4873b8e9d20f446" Dec 11 11:34:11 crc kubenswrapper[4953]: E1211 11:34:11.570908 4953 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d35d877acfc574760b584077c3ded968f541671edd460330f4873b8e9d20f446\": container with ID starting with d35d877acfc574760b584077c3ded968f541671edd460330f4873b8e9d20f446 not found: ID does not exist" containerID="d35d877acfc574760b584077c3ded968f541671edd460330f4873b8e9d20f446" Dec 11 11:34:11 crc kubenswrapper[4953]: I1211 11:34:11.570937 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d35d877acfc574760b584077c3ded968f541671edd460330f4873b8e9d20f446"} err="failed to get container status \"d35d877acfc574760b584077c3ded968f541671edd460330f4873b8e9d20f446\": rpc error: code = NotFound desc = could not find container \"d35d877acfc574760b584077c3ded968f541671edd460330f4873b8e9d20f446\": container with ID starting with d35d877acfc574760b584077c3ded968f541671edd460330f4873b8e9d20f446 not found: ID does not exist" Dec 11 11:34:12 crc kubenswrapper[4953]: I1211 11:34:12.485143 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f957cb83-d834-4ba8-9660-0175dfc0a151" path="/var/lib/kubelet/pods/f957cb83-d834-4ba8-9660-0175dfc0a151/volumes" Dec 11 11:34:18 crc kubenswrapper[4953]: I1211 11:34:18.195775 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:34:18 crc kubenswrapper[4953]: I1211 11:34:18.196868 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:34:48 crc kubenswrapper[4953]: I1211 11:34:48.194539 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:34:48 crc kubenswrapper[4953]: I1211 11:34:48.195250 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:35:18 crc kubenswrapper[4953]: I1211 11:35:18.193798 4953 patch_prober.go:28] interesting pod/machine-config-daemon-q2898 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:35:18 crc kubenswrapper[4953]: I1211 11:35:18.194457 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:35:18 crc kubenswrapper[4953]: I1211 11:35:18.194518 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2898" Dec 11 11:35:18 crc kubenswrapper[4953]: I1211 11:35:18.195254 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51"} pod="openshift-machine-config-operator/machine-config-daemon-q2898" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 11:35:18 crc kubenswrapper[4953]: I1211 11:35:18.195339 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerName="machine-config-daemon" containerID="cri-o://df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" gracePeriod=600 Dec 11 11:35:18 crc kubenswrapper[4953]: E1211 11:35:18.316704 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:35:19 crc kubenswrapper[4953]: I1211 11:35:19.310080 4953 generic.go:334] "Generic (PLEG): container finished" podID="ed741fb7-1326-48b7-a713-17c9f0243eac" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" exitCode=0 Dec 11 11:35:19 crc kubenswrapper[4953]: I1211 11:35:19.310161 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerDied","Data":"df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51"} Dec 11 11:35:19 crc kubenswrapper[4953]: I1211 11:35:19.310210 4953 scope.go:117] "RemoveContainer" containerID="3fa714d5edfb81bdbba0eb00b5ad25ac380f07f086e8da7b3fec27e11ee65c51" Dec 11 11:35:19 crc kubenswrapper[4953]: I1211 11:35:19.311049 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:35:19 crc kubenswrapper[4953]: E1211 11:35:19.311426 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:35:22 crc kubenswrapper[4953]: I1211 11:35:22.830288 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nqq8j/must-gather-q72q5"] Dec 11 11:35:22 crc kubenswrapper[4953]: E1211 11:35:22.832326 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f957cb83-d834-4ba8-9660-0175dfc0a151" containerName="extract-utilities" Dec 11 11:35:22 crc kubenswrapper[4953]: I1211 11:35:22.832430 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f957cb83-d834-4ba8-9660-0175dfc0a151" containerName="extract-utilities" Dec 11 11:35:22 crc kubenswrapper[4953]: E1211 11:35:22.832510 4953 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f957cb83-d834-4ba8-9660-0175dfc0a151" containerName="extract-content" Dec 11 11:35:22 crc kubenswrapper[4953]: I1211 11:35:22.832613 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f957cb83-d834-4ba8-9660-0175dfc0a151" containerName="extract-content" Dec 11 11:35:22 crc kubenswrapper[4953]: E1211 11:35:22.832717 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f957cb83-d834-4ba8-9660-0175dfc0a151" containerName="registry-server" Dec 11 11:35:22 crc kubenswrapper[4953]: I1211 11:35:22.832779 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f957cb83-d834-4ba8-9660-0175dfc0a151" containerName="registry-server" Dec 11 11:35:22 crc kubenswrapper[4953]: I1211 11:35:22.832991 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f957cb83-d834-4ba8-9660-0175dfc0a151" containerName="registry-server" Dec 11 11:35:22 crc kubenswrapper[4953]: I1211 11:35:22.833996 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nqq8j/must-gather-q72q5" Dec 11 11:35:22 crc kubenswrapper[4953]: I1211 11:35:22.836027 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nqq8j"/"openshift-service-ca.crt" Dec 11 11:35:22 crc kubenswrapper[4953]: I1211 11:35:22.837409 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nqq8j"/"kube-root-ca.crt" Dec 11 11:35:22 crc kubenswrapper[4953]: I1211 11:35:22.837525 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-nqq8j"/"default-dockercfg-wqhdl" Dec 11 11:35:22 crc kubenswrapper[4953]: I1211 11:35:22.845170 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nqq8j/must-gather-q72q5"] Dec 11 11:35:23 crc kubenswrapper[4953]: I1211 11:35:23.000125 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcf4n\" (UniqueName: \"kubernetes.io/projected/8848bf4e-7085-45ec-bcff-138f1472d76e-kube-api-access-xcf4n\") pod \"must-gather-q72q5\" (UID: \"8848bf4e-7085-45ec-bcff-138f1472d76e\") " pod="openshift-must-gather-nqq8j/must-gather-q72q5" Dec 11 11:35:23 crc kubenswrapper[4953]: I1211 11:35:23.000544 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8848bf4e-7085-45ec-bcff-138f1472d76e-must-gather-output\") pod \"must-gather-q72q5\" (UID: \"8848bf4e-7085-45ec-bcff-138f1472d76e\") " pod="openshift-must-gather-nqq8j/must-gather-q72q5" Dec 11 11:35:23 crc kubenswrapper[4953]: I1211 11:35:23.101890 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8848bf4e-7085-45ec-bcff-138f1472d76e-must-gather-output\") pod \"must-gather-q72q5\" (UID: \"8848bf4e-7085-45ec-bcff-138f1472d76e\") " pod="openshift-must-gather-nqq8j/must-gather-q72q5" Dec 11 11:35:23 crc kubenswrapper[4953]: I1211 11:35:23.101947 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcf4n\" (UniqueName: \"kubernetes.io/projected/8848bf4e-7085-45ec-bcff-138f1472d76e-kube-api-access-xcf4n\") pod \"must-gather-q72q5\" (UID: \"8848bf4e-7085-45ec-bcff-138f1472d76e\") " pod="openshift-must-gather-nqq8j/must-gather-q72q5" Dec 11 11:35:23 crc kubenswrapper[4953]: I1211 11:35:23.102453 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8848bf4e-7085-45ec-bcff-138f1472d76e-must-gather-output\") pod \"must-gather-q72q5\" (UID: \"8848bf4e-7085-45ec-bcff-138f1472d76e\") " pod="openshift-must-gather-nqq8j/must-gather-q72q5" Dec 11 11:35:23 crc kubenswrapper[4953]: I1211 11:35:23.140833 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcf4n\" (UniqueName: \"kubernetes.io/projected/8848bf4e-7085-45ec-bcff-138f1472d76e-kube-api-access-xcf4n\") pod \"must-gather-q72q5\" (UID: \"8848bf4e-7085-45ec-bcff-138f1472d76e\") " pod="openshift-must-gather-nqq8j/must-gather-q72q5" Dec 11 11:35:23 crc kubenswrapper[4953]: I1211 11:35:23.155528 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nqq8j/must-gather-q72q5" Dec 11 11:35:23 crc kubenswrapper[4953]: I1211 11:35:23.615950 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nqq8j/must-gather-q72q5"] Dec 11 11:35:23 crc kubenswrapper[4953]: W1211 11:35:23.638777 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8848bf4e_7085_45ec_bcff_138f1472d76e.slice/crio-ab49739f99f6d8ac4f5a20b87b8a0b22c118fa4889892b9f206af55bce0765c1 WatchSource:0}: Error finding container ab49739f99f6d8ac4f5a20b87b8a0b22c118fa4889892b9f206af55bce0765c1: Status 404 returned error can't find the container with id ab49739f99f6d8ac4f5a20b87b8a0b22c118fa4889892b9f206af55bce0765c1 Dec 11 11:35:23 crc kubenswrapper[4953]: I1211 11:35:23.644404 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 11:35:24 crc kubenswrapper[4953]: I1211 11:35:24.353764 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nqq8j/must-gather-q72q5" event={"ID":"8848bf4e-7085-45ec-bcff-138f1472d76e","Type":"ContainerStarted","Data":"ab49739f99f6d8ac4f5a20b87b8a0b22c118fa4889892b9f206af55bce0765c1"} Dec 11 11:35:31 crc kubenswrapper[4953]: I1211 11:35:31.427981 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nqq8j/must-gather-q72q5" event={"ID":"8848bf4e-7085-45ec-bcff-138f1472d76e","Type":"ContainerStarted","Data":"8042c82f8784ae068297a8da25885e0e3b2ebb5f580a1426700ed291c9a0fbce"} Dec 11 11:35:32 crc kubenswrapper[4953]: I1211 11:35:32.435647 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nqq8j/must-gather-q72q5" event={"ID":"8848bf4e-7085-45ec-bcff-138f1472d76e","Type":"ContainerStarted","Data":"00751482310a6941d8755f79ecc1cba843ec127b6a0bfcad68f2866b1bb8f861"} Dec 11 11:35:32 crc kubenswrapper[4953]: I1211 11:35:32.463455 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nqq8j/must-gather-q72q5" podStartSLOduration=3.014984939 podStartE2EDuration="10.463435371s" podCreationTimestamp="2025-12-11 11:35:22 +0000 UTC" firstStartedPulling="2025-12-11 11:35:23.644188348 +0000 UTC m=+5041.668047381" lastFinishedPulling="2025-12-11 11:35:31.09263875 +0000 UTC m=+5049.116497813" observedRunningTime="2025-12-11 11:35:32.459921821 +0000 UTC m=+5050.483780874" watchObservedRunningTime="2025-12-11 11:35:32.463435371 +0000 UTC m=+5050.487294404" Dec 11 11:35:32 crc kubenswrapper[4953]: I1211 11:35:32.478927 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:35:32 crc kubenswrapper[4953]: E1211 11:35:32.479149 4953 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:35:44 crc kubenswrapper[4953]: I1211 11:35:44.473420 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:35:44 crc kubenswrapper[4953]: E1211 11:35:44.474546 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:35:56 crc kubenswrapper[4953]: I1211 11:35:56.474560 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:35:56 crc kubenswrapper[4953]: E1211 11:35:56.475336 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:36:08 crc kubenswrapper[4953]: I1211 11:36:08.473498 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:36:08 crc kubenswrapper[4953]: E1211 11:36:08.474398 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:36:10 crc kubenswrapper[4953]: I1211 11:36:10.724516 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tlrdc"] Dec 11 11:36:10 crc kubenswrapper[4953]: I1211 11:36:10.728101 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:10 crc kubenswrapper[4953]: I1211 11:36:10.777357 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlrdc"] Dec 11 11:36:10 crc kubenswrapper[4953]: I1211 11:36:10.893128 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270eef-12b8-494b-955d-f19b9dc51255-catalog-content\") pod \"redhat-marketplace-tlrdc\" (UID: \"76270eef-12b8-494b-955d-f19b9dc51255\") " pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:10 crc kubenswrapper[4953]: I1211 11:36:10.893206 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr7bh\" (UniqueName: \"kubernetes.io/projected/76270eef-12b8-494b-955d-f19b9dc51255-kube-api-access-hr7bh\") pod \"redhat-marketplace-tlrdc\" (UID: \"76270eef-12b8-494b-955d-f19b9dc51255\") " pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:10 crc kubenswrapper[4953]: I1211 11:36:10.893419 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270eef-12b8-494b-955d-f19b9dc51255-utilities\") pod \"redhat-marketplace-tlrdc\" (UID: \"76270eef-12b8-494b-955d-f19b9dc51255\") " pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:10 crc kubenswrapper[4953]: I1211 11:36:10.994712 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270eef-12b8-494b-955d-f19b9dc51255-catalog-content\") pod \"redhat-marketplace-tlrdc\" (UID: \"76270eef-12b8-494b-955d-f19b9dc51255\") " pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:10 crc kubenswrapper[4953]: I1211 11:36:10.994787 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr7bh\" (UniqueName: \"kubernetes.io/projected/76270eef-12b8-494b-955d-f19b9dc51255-kube-api-access-hr7bh\") pod \"redhat-marketplace-tlrdc\" (UID: \"76270eef-12b8-494b-955d-f19b9dc51255\") " pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:10 crc kubenswrapper[4953]: I1211 11:36:10.994838 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270eef-12b8-494b-955d-f19b9dc51255-utilities\") pod \"redhat-marketplace-tlrdc\" (UID: \"76270eef-12b8-494b-955d-f19b9dc51255\") " pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:10 crc kubenswrapper[4953]: I1211 11:36:10.995408 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270eef-12b8-494b-955d-f19b9dc51255-catalog-content\") pod \"redhat-marketplace-tlrdc\" (UID: \"76270eef-12b8-494b-955d-f19b9dc51255\") " pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:10 crc kubenswrapper[4953]: I1211 11:36:10.995429 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270eef-12b8-494b-955d-f19b9dc51255-utilities\") pod \"redhat-marketplace-tlrdc\" (UID: \"76270eef-12b8-494b-955d-f19b9dc51255\") " pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:11 crc kubenswrapper[4953]: I1211 11:36:11.019675 4953 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hr7bh\" (UniqueName: \"kubernetes.io/projected/76270eef-12b8-494b-955d-f19b9dc51255-kube-api-access-hr7bh\") pod \"redhat-marketplace-tlrdc\" (UID: \"76270eef-12b8-494b-955d-f19b9dc51255\") " pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:11 crc kubenswrapper[4953]: I1211 11:36:11.079319 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:11 crc kubenswrapper[4953]: I1211 11:36:11.577276 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlrdc"] Dec 11 11:36:11 crc kubenswrapper[4953]: I1211 11:36:11.873098 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlrdc" event={"ID":"76270eef-12b8-494b-955d-f19b9dc51255","Type":"ContainerStarted","Data":"8fee06062e30e2dadb9715a5b98fe7f143d3115abf02476fd7b6303265ce8d9c"} Dec 11 11:36:11 crc kubenswrapper[4953]: I1211 11:36:11.873400 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlrdc" event={"ID":"76270eef-12b8-494b-955d-f19b9dc51255","Type":"ContainerStarted","Data":"5e05bedcf47d119c78ccacebe15d44320d80380fb6b7b6698c0c8eeea5a02416"} Dec 11 11:36:12 crc kubenswrapper[4953]: I1211 11:36:12.896553 4953 generic.go:334] "Generic (PLEG): container finished" podID="76270eef-12b8-494b-955d-f19b9dc51255" containerID="8fee06062e30e2dadb9715a5b98fe7f143d3115abf02476fd7b6303265ce8d9c" exitCode=0 Dec 11 11:36:12 crc kubenswrapper[4953]: I1211 11:36:12.896627 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlrdc" event={"ID":"76270eef-12b8-494b-955d-f19b9dc51255","Type":"ContainerDied","Data":"8fee06062e30e2dadb9715a5b98fe7f143d3115abf02476fd7b6303265ce8d9c"} Dec 11 11:36:13 crc kubenswrapper[4953]: I1211 11:36:13.906394 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlrdc" event={"ID":"76270eef-12b8-494b-955d-f19b9dc51255","Type":"ContainerStarted","Data":"26123693c68bb922d70e2731a8b7519637d642f56da0a805aedeffbf23662888"} Dec 11 11:36:14 crc kubenswrapper[4953]: I1211 11:36:14.919019 4953 generic.go:334] "Generic (PLEG): container finished" podID="76270eef-12b8-494b-955d-f19b9dc51255" containerID="26123693c68bb922d70e2731a8b7519637d642f56da0a805aedeffbf23662888" exitCode=0 Dec 11 11:36:14 crc kubenswrapper[4953]: I1211 11:36:14.919098 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlrdc" event={"ID":"76270eef-12b8-494b-955d-f19b9dc51255","Type":"ContainerDied","Data":"26123693c68bb922d70e2731a8b7519637d642f56da0a805aedeffbf23662888"} Dec 11 11:36:15 crc kubenswrapper[4953]: I1211 11:36:15.928938 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlrdc" event={"ID":"76270eef-12b8-494b-955d-f19b9dc51255","Type":"ContainerStarted","Data":"d75b9002f2156bfd85d01319d10cafb9f89daa3a57decd4f818de90dd2cf0a70"} Dec 11 11:36:15 crc kubenswrapper[4953]: I1211 11:36:15.949370 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tlrdc" podStartSLOduration=3.181168087 podStartE2EDuration="5.949352923s" podCreationTimestamp="2025-12-11 11:36:10 +0000 UTC" firstStartedPulling="2025-12-11 11:36:12.901826605 +0000 UTC m=+5090.925685648" lastFinishedPulling="2025-12-11 11:36:15.670011451 +0000 UTC 
m=+5093.693870484" observedRunningTime="2025-12-11 11:36:15.946973997 +0000 UTC m=+5093.970833030" watchObservedRunningTime="2025-12-11 11:36:15.949352923 +0000 UTC m=+5093.973211956" Dec 11 11:36:19 crc kubenswrapper[4953]: I1211 11:36:19.473377 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:36:19 crc kubenswrapper[4953]: E1211 11:36:19.474103 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:36:21 crc kubenswrapper[4953]: I1211 11:36:21.082783 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:21 crc kubenswrapper[4953]: I1211 11:36:21.083117 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:21 crc kubenswrapper[4953]: I1211 11:36:21.155030 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:22 crc kubenswrapper[4953]: I1211 11:36:22.031228 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:22 crc kubenswrapper[4953]: I1211 11:36:22.082666 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlrdc"] Dec 11 11:36:23 crc kubenswrapper[4953]: I1211 11:36:23.987432 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tlrdc" podUID="76270eef-12b8-494b-955d-f19b9dc51255" containerName="registry-server" containerID="cri-o://d75b9002f2156bfd85d01319d10cafb9f89daa3a57decd4f818de90dd2cf0a70" gracePeriod=2 Dec 11 11:36:25 crc kubenswrapper[4953]: I1211 11:36:25.001961 4953 generic.go:334] "Generic (PLEG): container finished" podID="76270eef-12b8-494b-955d-f19b9dc51255" containerID="d75b9002f2156bfd85d01319d10cafb9f89daa3a57decd4f818de90dd2cf0a70" exitCode=0 Dec 11 11:36:25 crc kubenswrapper[4953]: I1211 11:36:25.002063 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlrdc" event={"ID":"76270eef-12b8-494b-955d-f19b9dc51255","Type":"ContainerDied","Data":"d75b9002f2156bfd85d01319d10cafb9f89daa3a57decd4f818de90dd2cf0a70"} Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.001226 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.011323 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlrdc" event={"ID":"76270eef-12b8-494b-955d-f19b9dc51255","Type":"ContainerDied","Data":"5e05bedcf47d119c78ccacebe15d44320d80380fb6b7b6698c0c8eeea5a02416"} Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.011382 4953 scope.go:117] "RemoveContainer" containerID="d75b9002f2156bfd85d01319d10cafb9f89daa3a57decd4f818de90dd2cf0a70" Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.011396 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlrdc" Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.042179 4953 scope.go:117] "RemoveContainer" containerID="26123693c68bb922d70e2731a8b7519637d642f56da0a805aedeffbf23662888" Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.051190 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr7bh\" (UniqueName: \"kubernetes.io/projected/76270eef-12b8-494b-955d-f19b9dc51255-kube-api-access-hr7bh\") pod \"76270eef-12b8-494b-955d-f19b9dc51255\" (UID: \"76270eef-12b8-494b-955d-f19b9dc51255\") " Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.051374 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270eef-12b8-494b-955d-f19b9dc51255-utilities\") pod \"76270eef-12b8-494b-955d-f19b9dc51255\" (UID: \"76270eef-12b8-494b-955d-f19b9dc51255\") " Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.051427 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270eef-12b8-494b-955d-f19b9dc51255-catalog-content\") pod \"76270eef-12b8-494b-955d-f19b9dc51255\" (UID: \"76270eef-12b8-494b-955d-f19b9dc51255\") " Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.055939 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76270eef-12b8-494b-955d-f19b9dc51255-utilities" (OuterVolumeSpecName: "utilities") pod "76270eef-12b8-494b-955d-f19b9dc51255" (UID: "76270eef-12b8-494b-955d-f19b9dc51255"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.062083 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76270eef-12b8-494b-955d-f19b9dc51255-kube-api-access-hr7bh" (OuterVolumeSpecName: "kube-api-access-hr7bh") pod "76270eef-12b8-494b-955d-f19b9dc51255" (UID: "76270eef-12b8-494b-955d-f19b9dc51255"). InnerVolumeSpecName "kube-api-access-hr7bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.062840 4953 scope.go:117] "RemoveContainer" containerID="8fee06062e30e2dadb9715a5b98fe7f143d3115abf02476fd7b6303265ce8d9c" Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.091321 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76270eef-12b8-494b-955d-f19b9dc51255-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76270eef-12b8-494b-955d-f19b9dc51255" (UID: "76270eef-12b8-494b-955d-f19b9dc51255"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.152591 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270eef-12b8-494b-955d-f19b9dc51255-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.152633 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270eef-12b8-494b-955d-f19b9dc51255-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.152648 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr7bh\" (UniqueName: \"kubernetes.io/projected/76270eef-12b8-494b-955d-f19b9dc51255-kube-api-access-hr7bh\") on node \"crc\" DevicePath \"\"" Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.364884 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlrdc"] Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.384944 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlrdc"] Dec 11 11:36:26 crc kubenswrapper[4953]: I1211 11:36:26.482537 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76270eef-12b8-494b-955d-f19b9dc51255" path="/var/lib/kubelet/pods/76270eef-12b8-494b-955d-f19b9dc51255/volumes" Dec 11 11:36:31 crc kubenswrapper[4953]: I1211 11:36:31.502236 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-4qtwl_6b26e336-7c68-4ba3-979b-211c05708639/kube-rbac-proxy/0.log" Dec 11 11:36:31 crc kubenswrapper[4953]: I1211 11:36:31.590267 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-4qtwl_6b26e336-7c68-4ba3-979b-211c05708639/manager/0.log" Dec 11 11:36:31 crc kubenswrapper[4953]: I1211 11:36:31.783080 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-jz7t2_95010a68-4a99-4e84-8785-cb970f7085e1/kube-rbac-proxy/0.log" Dec 11 11:36:31 crc kubenswrapper[4953]: I1211 11:36:31.792437 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-jz7t2_95010a68-4a99-4e84-8785-cb970f7085e1/manager/0.log" Dec 11 11:36:31 crc kubenswrapper[4953]: I1211 11:36:31.929520 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z_6612d320-600d-4d86-a518-8594611f0a3c/util/0.log" Dec 11 11:36:32 crc kubenswrapper[4953]: I1211 11:36:32.100346 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z_6612d320-600d-4d86-a518-8594611f0a3c/util/0.log" Dec 11 11:36:32 crc kubenswrapper[4953]: I1211 11:36:32.114831 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z_6612d320-600d-4d86-a518-8594611f0a3c/pull/0.log" Dec 11 11:36:32 crc kubenswrapper[4953]: I1211 11:36:32.137565 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z_6612d320-600d-4d86-a518-8594611f0a3c/pull/0.log" Dec 11 11:36:32 crc 
kubenswrapper[4953]: I1211 11:36:32.329403 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z_6612d320-600d-4d86-a518-8594611f0a3c/util/0.log" Dec 11 11:36:32 crc kubenswrapper[4953]: I1211 11:36:32.343195 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z_6612d320-600d-4d86-a518-8594611f0a3c/extract/0.log" Dec 11 11:36:32 crc kubenswrapper[4953]: I1211 11:36:32.362722 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136czw6z_6612d320-600d-4d86-a518-8594611f0a3c/pull/0.log" Dec 11 11:36:32 crc kubenswrapper[4953]: I1211 11:36:32.503872 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-c8jqf_b97b8317-f4e7-440c-8d72-df1cf55afe09/kube-rbac-proxy/0.log" Dec 11 11:36:32 crc kubenswrapper[4953]: I1211 11:36:32.570913 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-c8jqf_b97b8317-f4e7-440c-8d72-df1cf55afe09/manager/0.log" Dec 11 11:36:32 crc kubenswrapper[4953]: I1211 11:36:32.619293 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-dx46k_c77a72a9-141b-4be9-99e2-406e16b68c2b/kube-rbac-proxy/0.log" Dec 11 11:36:32 crc kubenswrapper[4953]: I1211 11:36:32.770265 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-dx46k_c77a72a9-141b-4be9-99e2-406e16b68c2b/manager/0.log" Dec 11 11:36:32 crc kubenswrapper[4953]: I1211 11:36:32.828698 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-zqt7s_43c3d99b-4ce9-421a-9212-c99b50e671af/kube-rbac-proxy/0.log" Dec 11 11:36:32 crc kubenswrapper[4953]: I1211 11:36:32.867722 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-zqt7s_43c3d99b-4ce9-421a-9212-c99b50e671af/manager/0.log" Dec 11 11:36:33 crc kubenswrapper[4953]: I1211 11:36:33.095542 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-jz2zb_b81d3c69-eb5d-406f-8f14-330eaf0edec3/kube-rbac-proxy/0.log" Dec 11 11:36:33 crc kubenswrapper[4953]: I1211 11:36:33.117826 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-jz2zb_b81d3c69-eb5d-406f-8f14-330eaf0edec3/manager/0.log" Dec 11 11:36:33 crc kubenswrapper[4953]: I1211 11:36:33.261272 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-nczzd_954a102c-a60d-405a-b579-450e6b8e5c8b/kube-rbac-proxy/0.log" Dec 11 11:36:33 crc kubenswrapper[4953]: I1211 11:36:33.321525 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-f75n2_b7626052-8b4d-46d2-8f66-5774f43643a0/kube-rbac-proxy/0.log" Dec 11 11:36:33 crc kubenswrapper[4953]: I1211 11:36:33.460562 4953 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-f75n2_b7626052-8b4d-46d2-8f66-5774f43643a0/manager/0.log" Dec 11 11:36:33 crc kubenswrapper[4953]: I1211 11:36:33.541429 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-nczzd_954a102c-a60d-405a-b579-450e6b8e5c8b/manager/0.log" Dec 11 11:36:33 crc kubenswrapper[4953]: I1211 11:36:33.584329 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-vz6q6_33ec47dc-5b73-4fd2-b0e1-eee01b12110f/kube-rbac-proxy/0.log" Dec 11 11:36:33 crc kubenswrapper[4953]: I1211 11:36:33.821234 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-vz6q6_33ec47dc-5b73-4fd2-b0e1-eee01b12110f/manager/0.log" Dec 11 11:36:33 crc kubenswrapper[4953]: I1211 11:36:33.843030 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-c95px_b7ddbee0-c6cd-4571-912d-09744da61237/kube-rbac-proxy/0.log" Dec 11 11:36:33 crc kubenswrapper[4953]: I1211 11:36:33.867980 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-c95px_b7ddbee0-c6cd-4571-912d-09744da61237/manager/0.log" Dec 11 11:36:34 crc kubenswrapper[4953]: I1211 11:36:34.006378 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-c2pkq_0ab27af2-4f6b-4e0f-b399-bef9b137ce63/kube-rbac-proxy/0.log" Dec 11 11:36:34 crc kubenswrapper[4953]: I1211 11:36:34.110286 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-c2pkq_0ab27af2-4f6b-4e0f-b399-bef9b137ce63/manager/0.log" Dec 11 11:36:34 crc kubenswrapper[4953]: I1211 11:36:34.198092 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-z6knw_46ad2123-023a-4bcb-9b05-2a6b223c2d02/kube-rbac-proxy/0.log" Dec 11 11:36:34 crc kubenswrapper[4953]: I1211 11:36:34.283005 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-z6knw_46ad2123-023a-4bcb-9b05-2a6b223c2d02/manager/0.log" Dec 11 11:36:34 crc kubenswrapper[4953]: I1211 11:36:34.392181 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-wvw7n_9feb23b4-0b52-42c6-98a8-6b1de2241028/kube-rbac-proxy/0.log" Dec 11 11:36:34 crc kubenswrapper[4953]: I1211 11:36:34.473872 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:36:34 crc kubenswrapper[4953]: E1211 11:36:34.474504 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:36:34 crc kubenswrapper[4953]: I1211 11:36:34.888559 4953 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-6nwkm_e905c779-8570-480f-a7b9-7bba299bee6b/kube-rbac-proxy/0.log" Dec 11 11:36:34 crc kubenswrapper[4953]: I1211 11:36:34.944172 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-wvw7n_9feb23b4-0b52-42c6-98a8-6b1de2241028/manager/0.log" Dec 11 11:36:34 crc kubenswrapper[4953]: I1211 11:36:34.963835 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-6nwkm_e905c779-8570-480f-a7b9-7bba299bee6b/manager/0.log" Dec 11 11:36:35 crc kubenswrapper[4953]: I1211 11:36:35.255739 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w_e7802018-7972-4d69-8b66-ea4bb637ff7f/manager/0.log" Dec 11 11:36:35 crc kubenswrapper[4953]: I1211 11:36:35.255906 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7f95dc5b94bcm8w_e7802018-7972-4d69-8b66-ea4bb637ff7f/kube-rbac-proxy/0.log" Dec 11 11:36:35 crc kubenswrapper[4953]: I1211 11:36:35.561123 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-966884dd6-rflkh_02775e13-9835-4032-95b6-b554fd29bde1/operator/0.log" Dec 11 11:36:35 crc kubenswrapper[4953]: I1211 11:36:35.704487 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-dx7jh_523ebe34-8eb2-4a92-ba2f-180e03f29d3a/registry-server/0.log" Dec 11 11:36:35 crc kubenswrapper[4953]: I1211 11:36:35.769837 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-gkctw_a27b4200-b26e-434d-be23-2940fe7a57c7/kube-rbac-proxy/0.log" Dec 11 11:36:35 crc kubenswrapper[4953]: I1211 11:36:35.854245 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-gkctw_a27b4200-b26e-434d-be23-2940fe7a57c7/manager/0.log" Dec 11 11:36:35 crc kubenswrapper[4953]: I1211 11:36:35.944371 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-5ths7_a5873dea-ac09-449b-95ae-fc5f77f0e8d4/kube-rbac-proxy/0.log" Dec 11 11:36:36 crc kubenswrapper[4953]: I1211 11:36:36.032748 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-5ths7_a5873dea-ac09-449b-95ae-fc5f77f0e8d4/manager/0.log" Dec 11 11:36:36 crc kubenswrapper[4953]: I1211 11:36:36.434941 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-85cbc5886b-lxtqb_1c0f14ca-80dd-4704-989d-ca02d722bf43/manager/0.log" Dec 11 11:36:36 crc kubenswrapper[4953]: I1211 11:36:36.498950 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-k9zt4_e94f5882-5902-4e23-82b7-374766161807/operator/0.log" Dec 11 11:36:36 crc kubenswrapper[4953]: I1211 11:36:36.544204 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-qx85w_876fe2ae-127d-4e15-943a-3d3496252660/kube-rbac-proxy/0.log" Dec 11 11:36:36 crc kubenswrapper[4953]: I1211 11:36:36.579688 4953 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-qx85w_876fe2ae-127d-4e15-943a-3d3496252660/manager/0.log" Dec 11 11:36:36 crc kubenswrapper[4953]: I1211 11:36:36.675500 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-5zt8b_47b393e6-75c0-493f-83f5-d7e9d67ef5dd/kube-rbac-proxy/0.log" Dec 11 11:36:36 crc kubenswrapper[4953]: I1211 11:36:36.789332 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-5zt8b_47b393e6-75c0-493f-83f5-d7e9d67ef5dd/manager/0.log" Dec 11 11:36:36 crc kubenswrapper[4953]: I1211 11:36:36.825324 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-7n7sr_905bc7ea-6d15-4d73-ad1c-71041c90e83f/kube-rbac-proxy/0.log" Dec 11 11:36:36 crc kubenswrapper[4953]: I1211 11:36:36.890485 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-7n7sr_905bc7ea-6d15-4d73-ad1c-71041c90e83f/manager/0.log" Dec 11 11:36:36 crc kubenswrapper[4953]: I1211 11:36:36.962309 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-gkqq9_d5261918-b44c-4d64-93d3-ab0742fdde80/kube-rbac-proxy/0.log" Dec 11 11:36:37 crc kubenswrapper[4953]: I1211 11:36:37.036060 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-gkqq9_d5261918-b44c-4d64-93d3-ab0742fdde80/manager/0.log" Dec 11 11:36:46 crc kubenswrapper[4953]: I1211 11:36:46.473415 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:36:46 crc kubenswrapper[4953]: E1211 11:36:46.474218 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:36:57 crc kubenswrapper[4953]: I1211 11:36:57.549338 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vrm5k_24a3a305-afdf-4c02-b335-b8c173651e93/control-plane-machine-set-operator/0.log" Dec 11 11:36:58 crc kubenswrapper[4953]: I1211 11:36:58.121324 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-nzrxl_908334c7-0bff-48d7-b294-70e88f29aa95/machine-api-operator/0.log" Dec 11 11:36:58 crc kubenswrapper[4953]: I1211 11:36:58.150915 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-nzrxl_908334c7-0bff-48d7-b294-70e88f29aa95/kube-rbac-proxy/0.log" Dec 11 11:36:59 crc kubenswrapper[4953]: I1211 11:36:59.472879 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:36:59 crc kubenswrapper[4953]: E1211 11:36:59.473455 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:37:10 crc kubenswrapper[4953]: I1211 11:37:10.918301 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-zpjqb_b04280da-0938-44cd-8c87-04fadceb003c/cert-manager-controller/0.log" Dec 11 11:37:11 crc kubenswrapper[4953]: I1211 11:37:11.071951 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-ht7z2_a6c256f8-7cf4-4196-b3ee-4124af7fed31/cert-manager-cainjector/0.log" Dec 11 11:37:11 crc kubenswrapper[4953]: I1211 11:37:11.142958 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-r2rsb_096fad5f-94d2-43d5-93d5-d3daf6438972/cert-manager-webhook/0.log" Dec 11 11:37:12 crc kubenswrapper[4953]: I1211 11:37:12.478597 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:37:12 crc kubenswrapper[4953]: E1211 11:37:12.479022 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:37:24 crc kubenswrapper[4953]: I1211 11:37:24.090616 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-rwg62_30d483a8-3c69-4a93-bb46-58c753550b0e/nmstate-console-plugin/0.log" Dec 11 11:37:24 crc kubenswrapper[4953]: I1211 11:37:24.265346 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5dnlx_79a8dd6f-7ac1-4129-bf6e-e77efc13a47b/nmstate-handler/0.log" Dec 11 11:37:24 crc kubenswrapper[4953]: I1211 11:37:24.271210 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-cl2xq_3c81d2de-4aed-4ff5-ad24-066959716a5b/kube-rbac-proxy/0.log" Dec 11 11:37:24 crc kubenswrapper[4953]: I1211 11:37:24.294400 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-cl2xq_3c81d2de-4aed-4ff5-ad24-066959716a5b/nmstate-metrics/0.log" Dec 11 11:37:24 crc kubenswrapper[4953]: I1211 11:37:24.473496 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:37:24 crc kubenswrapper[4953]: E1211 11:37:24.473791 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:37:24 crc kubenswrapper[4953]: I1211 11:37:24.506722 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-qnz6f_6a26b7e3-7f9d-4532-9070-aa467b57f0e4/nmstate-operator/0.log" Dec 11 11:37:24 crc 
kubenswrapper[4953]: I1211 11:37:24.607198 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-6vgqn_092d166e-69a2-487f-8790-77067bc1e7c6/nmstate-webhook/0.log" Dec 11 11:37:37 crc kubenswrapper[4953]: I1211 11:37:37.473491 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:37:37 crc kubenswrapper[4953]: E1211 11:37:37.474294 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:37:38 crc kubenswrapper[4953]: I1211 11:37:38.896790 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-5t68s_15c4c986-25bb-43ac-93b3-ea7b2dd1e707/kube-rbac-proxy/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.082282 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-wzplr_e417a1bf-f380-4cd9-8f0b-a9b1766c578a/frr-k8s-webhook-server/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.124251 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-5t68s_15c4c986-25bb-43ac-93b3-ea7b2dd1e707/controller/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.235325 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/cp-frr-files/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.413127 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/cp-frr-files/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.420448 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/cp-metrics/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.421318 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/cp-reloader/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.450082 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/cp-reloader/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.577488 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/cp-frr-files/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.601697 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/cp-metrics/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.632604 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/cp-metrics/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.638551 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/cp-reloader/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 
11:37:39.797748 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/cp-reloader/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.797919 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/cp-frr-files/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.856704 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/cp-metrics/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.889791 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/controller/0.log" Dec 11 11:37:39 crc kubenswrapper[4953]: I1211 11:37:39.973376 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/frr-metrics/0.log" Dec 11 11:37:40 crc kubenswrapper[4953]: I1211 11:37:40.008196 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/kube-rbac-proxy/0.log" Dec 11 11:37:40 crc kubenswrapper[4953]: I1211 11:37:40.107732 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/kube-rbac-proxy-frr/0.log" Dec 11 11:37:40 crc kubenswrapper[4953]: I1211 11:37:40.164364 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/reloader/0.log" Dec 11 11:37:40 crc kubenswrapper[4953]: I1211 11:37:40.347955 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7df594d648-xsbbg_9dd77619-44b9-4c77-a3c9-da9aca01ebdf/manager/0.log" Dec 11 11:37:40 crc kubenswrapper[4953]: I1211 11:37:40.453325 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6774c47879-2wtwj_0b1f3560-9351-4283-b171-7df165a2bedc/webhook-server/0.log" Dec 11 11:37:40 crc kubenswrapper[4953]: I1211 11:37:40.633799 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-z79z5_94d8c17f-ab97-41a7-a7e5-bb8fa013b562/kube-rbac-proxy/0.log" Dec 11 11:37:41 crc kubenswrapper[4953]: I1211 11:37:41.091174 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-z79z5_94d8c17f-ab97-41a7-a7e5-bb8fa013b562/speaker/0.log" Dec 11 11:37:41 crc kubenswrapper[4953]: I1211 11:37:41.724697 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wsh66_97ad19c9-42d1-49f4-a634-baa459c11c80/frr/0.log" Dec 11 11:37:51 crc kubenswrapper[4953]: I1211 11:37:51.473711 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:37:51 crc kubenswrapper[4953]: E1211 11:37:51.474467 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:37:53 crc kubenswrapper[4953]: I1211 11:37:53.894113 4953 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg_c0689468-4037-474c-b76c-3580965a01fc/util/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.089713 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg_c0689468-4037-474c-b76c-3580965a01fc/util/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.137595 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg_c0689468-4037-474c-b76c-3580965a01fc/pull/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.166428 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg_c0689468-4037-474c-b76c-3580965a01fc/pull/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.319564 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg_c0689468-4037-474c-b76c-3580965a01fc/util/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.358499 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg_c0689468-4037-474c-b76c-3580965a01fc/extract/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.376215 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah2sdg_c0689468-4037-474c-b76c-3580965a01fc/pull/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.545714 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx_40077d7f-4309-4339-91d0-7596ed662f75/util/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.665125 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx_40077d7f-4309-4339-91d0-7596ed662f75/util/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.702746 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx_40077d7f-4309-4339-91d0-7596ed662f75/pull/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.703825 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx_40077d7f-4309-4339-91d0-7596ed662f75/pull/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.853725 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx_40077d7f-4309-4339-91d0-7596ed662f75/pull/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.856128 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx_40077d7f-4309-4339-91d0-7596ed662f75/util/0.log" Dec 11 11:37:54 crc kubenswrapper[4953]: I1211 11:37:54.879553 4953 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zjdtx_40077d7f-4309-4339-91d0-7596ed662f75/extract/0.log" Dec 11 11:37:55 crc kubenswrapper[4953]: I1211 11:37:55.040702 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg_e3eae748-2ab6-4203-826e-b7555edb049a/util/0.log" Dec 11 11:37:55 crc kubenswrapper[4953]: I1211 11:37:55.211215 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg_e3eae748-2ab6-4203-826e-b7555edb049a/util/0.log" Dec 11 11:37:55 crc kubenswrapper[4953]: I1211 11:37:55.216621 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg_e3eae748-2ab6-4203-826e-b7555edb049a/pull/0.log" Dec 11 11:37:55 crc kubenswrapper[4953]: I1211 11:37:55.230724 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg_e3eae748-2ab6-4203-826e-b7555edb049a/pull/0.log" Dec 11 11:37:55 crc kubenswrapper[4953]: I1211 11:37:55.541047 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg_e3eae748-2ab6-4203-826e-b7555edb049a/extract/0.log" Dec 11 11:37:55 crc kubenswrapper[4953]: I1211 11:37:55.547253 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg_e3eae748-2ab6-4203-826e-b7555edb049a/util/0.log" Dec 11 11:37:55 crc kubenswrapper[4953]: I1211 11:37:55.550829 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8k2qpg_e3eae748-2ab6-4203-826e-b7555edb049a/pull/0.log" Dec 11 11:37:55 crc kubenswrapper[4953]: I1211 11:37:55.722925 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nqdlc_daad820d-39e4-445d-9a3d-555c7ed62b43/extract-utilities/0.log" Dec 11 11:37:55 crc kubenswrapper[4953]: I1211 11:37:55.901052 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nqdlc_daad820d-39e4-445d-9a3d-555c7ed62b43/extract-utilities/0.log" Dec 11 11:37:55 crc kubenswrapper[4953]: I1211 11:37:55.926017 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nqdlc_daad820d-39e4-445d-9a3d-555c7ed62b43/extract-content/0.log" Dec 11 11:37:55 crc kubenswrapper[4953]: I1211 11:37:55.957380 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nqdlc_daad820d-39e4-445d-9a3d-555c7ed62b43/extract-content/0.log" Dec 11 11:37:56 crc kubenswrapper[4953]: I1211 11:37:56.129102 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nqdlc_daad820d-39e4-445d-9a3d-555c7ed62b43/extract-content/0.log" Dec 11 11:37:56 crc kubenswrapper[4953]: I1211 11:37:56.197111 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nqdlc_daad820d-39e4-445d-9a3d-555c7ed62b43/extract-utilities/0.log" Dec 11 11:37:56 crc kubenswrapper[4953]: I1211 11:37:56.333022 4953 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-nqdlc_daad820d-39e4-445d-9a3d-555c7ed62b43/registry-server/0.log" Dec 11 11:37:56 crc kubenswrapper[4953]: I1211 11:37:56.372474 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbqqj_c7f97b11-818a-4328-8013-3501f45516ef/extract-utilities/0.log" Dec 11 11:37:56 crc kubenswrapper[4953]: I1211 11:37:56.507929 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbqqj_c7f97b11-818a-4328-8013-3501f45516ef/extract-content/0.log" Dec 11 11:37:56 crc kubenswrapper[4953]: I1211 11:37:56.553984 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbqqj_c7f97b11-818a-4328-8013-3501f45516ef/extract-utilities/0.log" Dec 11 11:37:56 crc kubenswrapper[4953]: I1211 11:37:56.588673 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbqqj_c7f97b11-818a-4328-8013-3501f45516ef/extract-content/0.log" Dec 11 11:37:56 crc kubenswrapper[4953]: I1211 11:37:56.701559 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbqqj_c7f97b11-818a-4328-8013-3501f45516ef/extract-utilities/0.log" Dec 11 11:37:56 crc kubenswrapper[4953]: I1211 11:37:56.712678 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbqqj_c7f97b11-818a-4328-8013-3501f45516ef/extract-content/0.log" Dec 11 11:37:56 crc kubenswrapper[4953]: I1211 11:37:56.965927 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2zqkh_1de6fb85-e275-4bf8-84f3-ab4a3b1e5565/marketplace-operator/0.log" Dec 11 11:37:57 crc kubenswrapper[4953]: I1211 11:37:57.036871 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jtjgm_b0483db9-edf7-4df4-8cd4-b64966d77014/extract-utilities/0.log" Dec 11 11:37:57 crc kubenswrapper[4953]: I1211 11:37:57.294465 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jtjgm_b0483db9-edf7-4df4-8cd4-b64966d77014/extract-content/0.log" Dec 11 11:37:57 crc kubenswrapper[4953]: I1211 11:37:57.335169 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jtjgm_b0483db9-edf7-4df4-8cd4-b64966d77014/extract-utilities/0.log" Dec 11 11:37:57 crc kubenswrapper[4953]: I1211 11:37:57.351886 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jtjgm_b0483db9-edf7-4df4-8cd4-b64966d77014/extract-content/0.log" Dec 11 11:37:57 crc kubenswrapper[4953]: I1211 11:37:57.508314 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbqqj_c7f97b11-818a-4328-8013-3501f45516ef/registry-server/0.log" Dec 11 11:37:57 crc kubenswrapper[4953]: I1211 11:37:57.546144 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jtjgm_b0483db9-edf7-4df4-8cd4-b64966d77014/extract-utilities/0.log" Dec 11 11:37:57 crc kubenswrapper[4953]: I1211 11:37:57.573867 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jtjgm_b0483db9-edf7-4df4-8cd4-b64966d77014/extract-content/0.log" Dec 11 11:37:57 crc kubenswrapper[4953]: I1211 11:37:57.729929 4953 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-jdxzs_65c394e3-db2a-449c-9963-a880e17adbb2/extract-utilities/0.log" Dec 11 11:37:57 crc kubenswrapper[4953]: I1211 11:37:57.795868 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jtjgm_b0483db9-edf7-4df4-8cd4-b64966d77014/registry-server/0.log" Dec 11 11:37:57 crc kubenswrapper[4953]: I1211 11:37:57.897031 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jdxzs_65c394e3-db2a-449c-9963-a880e17adbb2/extract-utilities/0.log" Dec 11 11:37:57 crc kubenswrapper[4953]: I1211 11:37:57.904132 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jdxzs_65c394e3-db2a-449c-9963-a880e17adbb2/extract-content/0.log" Dec 11 11:37:57 crc kubenswrapper[4953]: I1211 11:37:57.920689 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jdxzs_65c394e3-db2a-449c-9963-a880e17adbb2/extract-content/0.log" Dec 11 11:37:58 crc kubenswrapper[4953]: I1211 11:37:58.104425 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jdxzs_65c394e3-db2a-449c-9963-a880e17adbb2/extract-content/0.log" Dec 11 11:37:58 crc kubenswrapper[4953]: I1211 11:37:58.111525 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jdxzs_65c394e3-db2a-449c-9963-a880e17adbb2/extract-utilities/0.log" Dec 11 11:37:58 crc kubenswrapper[4953]: I1211 11:37:58.761102 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jdxzs_65c394e3-db2a-449c-9963-a880e17adbb2/registry-server/0.log" Dec 11 11:38:06 crc kubenswrapper[4953]: I1211 11:38:06.473862 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:38:06 crc kubenswrapper[4953]: E1211 11:38:06.474672 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:38:18 crc kubenswrapper[4953]: I1211 11:38:18.474257 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:38:18 crc kubenswrapper[4953]: E1211 11:38:18.475071 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac" Dec 11 11:38:30 crc kubenswrapper[4953]: I1211 11:38:30.473328 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51" Dec 11 11:38:30 crc kubenswrapper[4953]: E1211 11:38:30.474165 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:38:45 crc kubenswrapper[4953]: I1211 11:38:45.473309 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51"
Dec 11 11:38:45 crc kubenswrapper[4953]: E1211 11:38:45.474159 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:38:59 crc kubenswrapper[4953]: I1211 11:38:59.473631 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51"
Dec 11 11:38:59 crc kubenswrapper[4953]: E1211 11:38:59.474537 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:39:10 crc kubenswrapper[4953]: I1211 11:39:10.989953 4953 generic.go:334] "Generic (PLEG): container finished" podID="8848bf4e-7085-45ec-bcff-138f1472d76e" containerID="8042c82f8784ae068297a8da25885e0e3b2ebb5f580a1426700ed291c9a0fbce" exitCode=0
Dec 11 11:39:10 crc kubenswrapper[4953]: I1211 11:39:10.990027 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nqq8j/must-gather-q72q5" event={"ID":"8848bf4e-7085-45ec-bcff-138f1472d76e","Type":"ContainerDied","Data":"8042c82f8784ae068297a8da25885e0e3b2ebb5f580a1426700ed291c9a0fbce"}
Dec 11 11:39:10 crc kubenswrapper[4953]: I1211 11:39:10.991390 4953 scope.go:117] "RemoveContainer" containerID="8042c82f8784ae068297a8da25885e0e3b2ebb5f580a1426700ed291c9a0fbce"
Dec 11 11:39:11 crc kubenswrapper[4953]: I1211 11:39:11.050845 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nqq8j_must-gather-q72q5_8848bf4e-7085-45ec-bcff-138f1472d76e/gather/0.log"
Dec 11 11:39:14 crc kubenswrapper[4953]: I1211 11:39:14.473652 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51"
Dec 11 11:39:14 crc kubenswrapper[4953]: E1211 11:39:14.474452 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:39:18 crc kubenswrapper[4953]: I1211 11:39:18.704349 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nqq8j/must-gather-q72q5"]
Dec 11 11:39:18 crc kubenswrapper[4953]: I1211 11:39:18.705211 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-nqq8j/must-gather-q72q5" podUID="8848bf4e-7085-45ec-bcff-138f1472d76e" containerName="copy" containerID="cri-o://00751482310a6941d8755f79ecc1cba843ec127b6a0bfcad68f2866b1bb8f861" gracePeriod=2
Dec 11 11:39:18 crc kubenswrapper[4953]: I1211 11:39:18.711527 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nqq8j/must-gather-q72q5"]
Dec 11 11:39:19 crc kubenswrapper[4953]: I1211 11:39:19.067038 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nqq8j_must-gather-q72q5_8848bf4e-7085-45ec-bcff-138f1472d76e/copy/0.log"
Dec 11 11:39:19 crc kubenswrapper[4953]: I1211 11:39:19.067364 4953 generic.go:334] "Generic (PLEG): container finished" podID="8848bf4e-7085-45ec-bcff-138f1472d76e" containerID="00751482310a6941d8755f79ecc1cba843ec127b6a0bfcad68f2866b1bb8f861" exitCode=143
Dec 11 11:39:19 crc kubenswrapper[4953]: I1211 11:39:19.122116 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nqq8j_must-gather-q72q5_8848bf4e-7085-45ec-bcff-138f1472d76e/copy/0.log"
Dec 11 11:39:19 crc kubenswrapper[4953]: I1211 11:39:19.122840 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nqq8j/must-gather-q72q5"
Dec 11 11:39:19 crc kubenswrapper[4953]: I1211 11:39:19.272665 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcf4n\" (UniqueName: \"kubernetes.io/projected/8848bf4e-7085-45ec-bcff-138f1472d76e-kube-api-access-xcf4n\") pod \"8848bf4e-7085-45ec-bcff-138f1472d76e\" (UID: \"8848bf4e-7085-45ec-bcff-138f1472d76e\") "
Dec 11 11:39:19 crc kubenswrapper[4953]: I1211 11:39:19.272735 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8848bf4e-7085-45ec-bcff-138f1472d76e-must-gather-output\") pod \"8848bf4e-7085-45ec-bcff-138f1472d76e\" (UID: \"8848bf4e-7085-45ec-bcff-138f1472d76e\") "
Dec 11 11:39:19 crc kubenswrapper[4953]: I1211 11:39:19.288429 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8848bf4e-7085-45ec-bcff-138f1472d76e-kube-api-access-xcf4n" (OuterVolumeSpecName: "kube-api-access-xcf4n") pod "8848bf4e-7085-45ec-bcff-138f1472d76e" (UID: "8848bf4e-7085-45ec-bcff-138f1472d76e"). InnerVolumeSpecName "kube-api-access-xcf4n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 11:39:19 crc kubenswrapper[4953]: I1211 11:39:19.374114 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcf4n\" (UniqueName: \"kubernetes.io/projected/8848bf4e-7085-45ec-bcff-138f1472d76e-kube-api-access-xcf4n\") on node \"crc\" DevicePath \"\""
Dec 11 11:39:19 crc kubenswrapper[4953]: I1211 11:39:19.374318 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8848bf4e-7085-45ec-bcff-138f1472d76e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8848bf4e-7085-45ec-bcff-138f1472d76e" (UID: "8848bf4e-7085-45ec-bcff-138f1472d76e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 11:39:19 crc kubenswrapper[4953]: I1211 11:39:19.475495 4953 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8848bf4e-7085-45ec-bcff-138f1472d76e-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 11 11:39:20 crc kubenswrapper[4953]: I1211 11:39:20.086570 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nqq8j_must-gather-q72q5_8848bf4e-7085-45ec-bcff-138f1472d76e/copy/0.log"
Dec 11 11:39:20 crc kubenswrapper[4953]: I1211 11:39:20.087079 4953 scope.go:117] "RemoveContainer" containerID="00751482310a6941d8755f79ecc1cba843ec127b6a0bfcad68f2866b1bb8f861"
Dec 11 11:39:20 crc kubenswrapper[4953]: I1211 11:39:20.087113 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nqq8j/must-gather-q72q5"
Dec 11 11:39:20 crc kubenswrapper[4953]: I1211 11:39:20.106599 4953 scope.go:117] "RemoveContainer" containerID="8042c82f8784ae068297a8da25885e0e3b2ebb5f580a1426700ed291c9a0fbce"
Dec 11 11:39:20 crc kubenswrapper[4953]: I1211 11:39:20.483894 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8848bf4e-7085-45ec-bcff-138f1472d76e" path="/var/lib/kubelet/pods/8848bf4e-7085-45ec-bcff-138f1472d76e/volumes"
Dec 11 11:39:26 crc kubenswrapper[4953]: I1211 11:39:26.473038 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51"
Dec 11 11:39:26 crc kubenswrapper[4953]: E1211 11:39:26.473618 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:39:41 crc kubenswrapper[4953]: I1211 11:39:41.473381 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51"
Dec 11 11:39:41 crc kubenswrapper[4953]: E1211 11:39:41.474305 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:39:56 crc kubenswrapper[4953]: I1211 11:39:56.609523 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51"
Dec 11 11:39:56 crc kubenswrapper[4953]: E1211 11:39:56.610517 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:40:08 crc kubenswrapper[4953]: I1211 11:40:08.474233 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51"
Dec 11 11:40:08 crc kubenswrapper[4953]: E1211 11:40:08.477743 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2898_openshift-machine-config-operator(ed741fb7-1326-48b7-a713-17c9f0243eac)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2898" podUID="ed741fb7-1326-48b7-a713-17c9f0243eac"
Dec 11 11:40:22 crc kubenswrapper[4953]: I1211 11:40:22.477020 4953 scope.go:117] "RemoveContainer" containerID="df367e14114b8c622936df53030f66057d63bf1f69a7125bce980e674a017a51"
Dec 11 11:40:22 crc kubenswrapper[4953]: I1211 11:40:22.909734 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2898" event={"ID":"ed741fb7-1326-48b7-a713-17c9f0243eac","Type":"ContainerStarted","Data":"b52c256aabc38aad588eae18ee678509b5be4c564e52a89dfb2e77628cf23093"}
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.754757 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j95rk"]
Dec 11 11:41:09 crc kubenswrapper[4953]: E1211 11:41:09.755688 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76270eef-12b8-494b-955d-f19b9dc51255" containerName="registry-server"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.755716 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="76270eef-12b8-494b-955d-f19b9dc51255" containerName="registry-server"
Dec 11 11:41:09 crc kubenswrapper[4953]: E1211 11:41:09.755737 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8848bf4e-7085-45ec-bcff-138f1472d76e" containerName="copy"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.755745 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8848bf4e-7085-45ec-bcff-138f1472d76e" containerName="copy"
Dec 11 11:41:09 crc kubenswrapper[4953]: E1211 11:41:09.755768 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76270eef-12b8-494b-955d-f19b9dc51255" containerName="extract-content"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.755778 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="76270eef-12b8-494b-955d-f19b9dc51255" containerName="extract-content"
Dec 11 11:41:09 crc kubenswrapper[4953]: E1211 11:41:09.755793 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76270eef-12b8-494b-955d-f19b9dc51255" containerName="extract-utilities"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.755800 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="76270eef-12b8-494b-955d-f19b9dc51255" containerName="extract-utilities"
Dec 11 11:41:09 crc kubenswrapper[4953]: E1211 11:41:09.755812 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8848bf4e-7085-45ec-bcff-138f1472d76e" containerName="gather"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.755818 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8848bf4e-7085-45ec-bcff-138f1472d76e" containerName="gather"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.756082 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="76270eef-12b8-494b-955d-f19b9dc51255" containerName="registry-server"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.756099 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="8848bf4e-7085-45ec-bcff-138f1472d76e" containerName="gather"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.756122 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="8848bf4e-7085-45ec-bcff-138f1472d76e" containerName="copy"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.757341 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.780133 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j95rk"]
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.874969 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h9ck\" (UniqueName: \"kubernetes.io/projected/bd7e46c8-af2b-46e9-9379-b44ce8676698-kube-api-access-2h9ck\") pod \"community-operators-j95rk\" (UID: \"bd7e46c8-af2b-46e9-9379-b44ce8676698\") " pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.875135 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7e46c8-af2b-46e9-9379-b44ce8676698-catalog-content\") pod \"community-operators-j95rk\" (UID: \"bd7e46c8-af2b-46e9-9379-b44ce8676698\") " pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.875224 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7e46c8-af2b-46e9-9379-b44ce8676698-utilities\") pod \"community-operators-j95rk\" (UID: \"bd7e46c8-af2b-46e9-9379-b44ce8676698\") " pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.949179 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lhzgm"]
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.951072 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.963962 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lhzgm"]
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.976774 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7e46c8-af2b-46e9-9379-b44ce8676698-utilities\") pod \"community-operators-j95rk\" (UID: \"bd7e46c8-af2b-46e9-9379-b44ce8676698\") " pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.976894 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h9ck\" (UniqueName: \"kubernetes.io/projected/bd7e46c8-af2b-46e9-9379-b44ce8676698-kube-api-access-2h9ck\") pod \"community-operators-j95rk\" (UID: \"bd7e46c8-af2b-46e9-9379-b44ce8676698\") " pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.976931 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-utilities\") pod \"certified-operators-lhzgm\" (UID: \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\") " pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.976967 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7e46c8-af2b-46e9-9379-b44ce8676698-catalog-content\") pod \"community-operators-j95rk\" (UID: \"bd7e46c8-af2b-46e9-9379-b44ce8676698\") " pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.977043 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-catalog-content\") pod \"certified-operators-lhzgm\" (UID: \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\") " pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.977128 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrgt2\" (UniqueName: \"kubernetes.io/projected/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-kube-api-access-wrgt2\") pod \"certified-operators-lhzgm\" (UID: \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\") " pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.977340 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7e46c8-af2b-46e9-9379-b44ce8676698-utilities\") pod \"community-operators-j95rk\" (UID: \"bd7e46c8-af2b-46e9-9379-b44ce8676698\") " pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:09 crc kubenswrapper[4953]: I1211 11:41:09.977533 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7e46c8-af2b-46e9-9379-b44ce8676698-catalog-content\") pod \"community-operators-j95rk\" (UID: \"bd7e46c8-af2b-46e9-9379-b44ce8676698\") " pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:10 crc kubenswrapper[4953]: I1211 11:41:10.003151 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h9ck\" (UniqueName: \"kubernetes.io/projected/bd7e46c8-af2b-46e9-9379-b44ce8676698-kube-api-access-2h9ck\") pod \"community-operators-j95rk\" (UID: \"bd7e46c8-af2b-46e9-9379-b44ce8676698\") " pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:10 crc kubenswrapper[4953]: I1211 11:41:10.154768 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:10 crc kubenswrapper[4953]: I1211 11:41:10.155140 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrgt2\" (UniqueName: \"kubernetes.io/projected/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-kube-api-access-wrgt2\") pod \"certified-operators-lhzgm\" (UID: \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\") " pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:10 crc kubenswrapper[4953]: I1211 11:41:10.155239 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-utilities\") pod \"certified-operators-lhzgm\" (UID: \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\") " pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:10 crc kubenswrapper[4953]: I1211 11:41:10.155301 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-catalog-content\") pod \"certified-operators-lhzgm\" (UID: \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\") " pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:10 crc kubenswrapper[4953]: I1211 11:41:10.155857 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-catalog-content\") pod \"certified-operators-lhzgm\" (UID: \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\") " pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:10 crc kubenswrapper[4953]: I1211 11:41:10.155929 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-utilities\") pod \"certified-operators-lhzgm\" (UID: \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\") " pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:10 crc kubenswrapper[4953]: I1211 11:41:10.181594 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrgt2\" (UniqueName: \"kubernetes.io/projected/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-kube-api-access-wrgt2\") pod \"certified-operators-lhzgm\" (UID: \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\") " pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:10 crc kubenswrapper[4953]: I1211 11:41:10.270091 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:10 crc kubenswrapper[4953]: I1211 11:41:10.560786 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j95rk"]
Dec 11 11:41:10 crc kubenswrapper[4953]: I1211 11:41:10.942072 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lhzgm"]
Dec 11 11:41:11 crc kubenswrapper[4953]: I1211 11:41:11.329165 4953 generic.go:334] "Generic (PLEG): container finished" podID="52fd7f85-8e6b-42cc-ad11-c625aef75a4d" containerID="0a5067b0df671a0c153ae94bc82d0c972d9efd6f2e29ec767762ed0f88a499b6" exitCode=0
Dec 11 11:41:11 crc kubenswrapper[4953]: I1211 11:41:11.329244 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzgm" event={"ID":"52fd7f85-8e6b-42cc-ad11-c625aef75a4d","Type":"ContainerDied","Data":"0a5067b0df671a0c153ae94bc82d0c972d9efd6f2e29ec767762ed0f88a499b6"}
Dec 11 11:41:11 crc kubenswrapper[4953]: I1211 11:41:11.329539 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzgm" event={"ID":"52fd7f85-8e6b-42cc-ad11-c625aef75a4d","Type":"ContainerStarted","Data":"a61ac9bfeb4793c99912606f33842399ecc21c939a215bd7fc34055de42a2b95"}
Dec 11 11:41:11 crc kubenswrapper[4953]: I1211 11:41:11.331714 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 11 11:41:11 crc kubenswrapper[4953]: I1211 11:41:11.332324 4953 generic.go:334] "Generic (PLEG): container finished" podID="bd7e46c8-af2b-46e9-9379-b44ce8676698" containerID="0b4cd8aaa01c5802dbe7f91cbd33d12c2917c30d73e66b262bf1eb1e6283563a" exitCode=0
Dec 11 11:41:11 crc kubenswrapper[4953]: I1211 11:41:11.332418 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j95rk" event={"ID":"bd7e46c8-af2b-46e9-9379-b44ce8676698","Type":"ContainerDied","Data":"0b4cd8aaa01c5802dbe7f91cbd33d12c2917c30d73e66b262bf1eb1e6283563a"}
Dec 11 11:41:11 crc kubenswrapper[4953]: I1211 11:41:11.332478 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j95rk" event={"ID":"bd7e46c8-af2b-46e9-9379-b44ce8676698","Type":"ContainerStarted","Data":"dc0d6eb0dd411fed2e6143848e4e8895bc9a2e15bbb4876207481e8d6088f385"}
Dec 11 11:41:12 crc kubenswrapper[4953]: I1211 11:41:12.410002 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j95rk" event={"ID":"bd7e46c8-af2b-46e9-9379-b44ce8676698","Type":"ContainerStarted","Data":"f7643b3ee3999181d5cf5711d34c01f3917e9a9fa13db055aebc3c287b9e5222"}
Dec 11 11:41:13 crc kubenswrapper[4953]: I1211 11:41:13.421736 4953 generic.go:334] "Generic (PLEG): container finished" podID="52fd7f85-8e6b-42cc-ad11-c625aef75a4d" containerID="79bdd5082a741c8fd0f002251c4b12d0c336974b84f066f3591439412c3c8a74" exitCode=0
Dec 11 11:41:13 crc kubenswrapper[4953]: I1211 11:41:13.421972 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzgm" event={"ID":"52fd7f85-8e6b-42cc-ad11-c625aef75a4d","Type":"ContainerDied","Data":"79bdd5082a741c8fd0f002251c4b12d0c336974b84f066f3591439412c3c8a74"}
Dec 11 11:41:13 crc kubenswrapper[4953]: I1211 11:41:13.559042 4953 generic.go:334] "Generic (PLEG): container finished" podID="bd7e46c8-af2b-46e9-9379-b44ce8676698" containerID="f7643b3ee3999181d5cf5711d34c01f3917e9a9fa13db055aebc3c287b9e5222" exitCode=0
Dec 11 11:41:13 crc kubenswrapper[4953]: I1211 11:41:13.559096 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j95rk" event={"ID":"bd7e46c8-af2b-46e9-9379-b44ce8676698","Type":"ContainerDied","Data":"f7643b3ee3999181d5cf5711d34c01f3917e9a9fa13db055aebc3c287b9e5222"}
Dec 11 11:41:14 crc kubenswrapper[4953]: I1211 11:41:14.569051 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j95rk" event={"ID":"bd7e46c8-af2b-46e9-9379-b44ce8676698","Type":"ContainerStarted","Data":"c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db"}
Dec 11 11:41:14 crc kubenswrapper[4953]: I1211 11:41:14.573178 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzgm" event={"ID":"52fd7f85-8e6b-42cc-ad11-c625aef75a4d","Type":"ContainerStarted","Data":"990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf"}
Dec 11 11:41:14 crc kubenswrapper[4953]: I1211 11:41:14.586908 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j95rk" podStartSLOduration=2.6971239860000003 podStartE2EDuration="5.586885684s" podCreationTimestamp="2025-12-11 11:41:09 +0000 UTC" firstStartedPulling="2025-12-11 11:41:11.334124466 +0000 UTC m=+5389.357983549" lastFinishedPulling="2025-12-11 11:41:14.223886214 +0000 UTC m=+5392.247745247" observedRunningTime="2025-12-11 11:41:14.586700818 +0000 UTC m=+5392.610559871" watchObservedRunningTime="2025-12-11 11:41:14.586885684 +0000 UTC m=+5392.610744717"
Dec 11 11:41:14 crc kubenswrapper[4953]: I1211 11:41:14.606871 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lhzgm" podStartSLOduration=2.697725256 podStartE2EDuration="5.606849072s" podCreationTimestamp="2025-12-11 11:41:09 +0000 UTC" firstStartedPulling="2025-12-11 11:41:11.33139157 +0000 UTC m=+5389.355250603" lastFinishedPulling="2025-12-11 11:41:14.240515366 +0000 UTC m=+5392.264374419" observedRunningTime="2025-12-11 11:41:14.6023431 +0000 UTC m=+5392.626202153" watchObservedRunningTime="2025-12-11 11:41:14.606849072 +0000 UTC m=+5392.630708105"
Dec 11 11:41:20 crc kubenswrapper[4953]: I1211 11:41:20.155128 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:20 crc kubenswrapper[4953]: I1211 11:41:20.155727 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:20 crc kubenswrapper[4953]: I1211 11:41:20.195703 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:20 crc kubenswrapper[4953]: I1211 11:41:20.272303 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:20 crc kubenswrapper[4953]: I1211 11:41:20.273314 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:20 crc kubenswrapper[4953]: I1211 11:41:20.664811 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:20 crc kubenswrapper[4953]: I1211 11:41:20.730434 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:20 crc kubenswrapper[4953]: I1211 11:41:20.731722 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:22 crc kubenswrapper[4953]: I1211 11:41:22.030840 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lhzgm"]
Dec 11 11:41:23 crc kubenswrapper[4953]: I1211 11:41:23.039635 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j95rk"]
Dec 11 11:41:23 crc kubenswrapper[4953]: I1211 11:41:23.040830 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j95rk" podUID="bd7e46c8-af2b-46e9-9379-b44ce8676698" containerName="registry-server" containerID="cri-o://c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db" gracePeriod=2
Dec 11 11:41:23 crc kubenswrapper[4953]: I1211 11:41:23.700694 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lhzgm" podUID="52fd7f85-8e6b-42cc-ad11-c625aef75a4d" containerName="registry-server" containerID="cri-o://990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf" gracePeriod=2
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.275983 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.388190 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.413433 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7e46c8-af2b-46e9-9379-b44ce8676698-utilities\") pod \"bd7e46c8-af2b-46e9-9379-b44ce8676698\" (UID: \"bd7e46c8-af2b-46e9-9379-b44ce8676698\") "
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.413520 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7e46c8-af2b-46e9-9379-b44ce8676698-catalog-content\") pod \"bd7e46c8-af2b-46e9-9379-b44ce8676698\" (UID: \"bd7e46c8-af2b-46e9-9379-b44ce8676698\") "
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.413591 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h9ck\" (UniqueName: \"kubernetes.io/projected/bd7e46c8-af2b-46e9-9379-b44ce8676698-kube-api-access-2h9ck\") pod \"bd7e46c8-af2b-46e9-9379-b44ce8676698\" (UID: \"bd7e46c8-af2b-46e9-9379-b44ce8676698\") "
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.415469 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd7e46c8-af2b-46e9-9379-b44ce8676698-utilities" (OuterVolumeSpecName: "utilities") pod "bd7e46c8-af2b-46e9-9379-b44ce8676698" (UID: "bd7e46c8-af2b-46e9-9379-b44ce8676698"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.420733 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7e46c8-af2b-46e9-9379-b44ce8676698-kube-api-access-2h9ck" (OuterVolumeSpecName: "kube-api-access-2h9ck") pod "bd7e46c8-af2b-46e9-9379-b44ce8676698" (UID: "bd7e46c8-af2b-46e9-9379-b44ce8676698"). InnerVolumeSpecName "kube-api-access-2h9ck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.471745 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd7e46c8-af2b-46e9-9379-b44ce8676698-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd7e46c8-af2b-46e9-9379-b44ce8676698" (UID: "bd7e46c8-af2b-46e9-9379-b44ce8676698"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.515069 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-utilities\") pod \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\" (UID: \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\") "
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.515156 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-catalog-content\") pod \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\" (UID: \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\") "
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.515247 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrgt2\" (UniqueName: \"kubernetes.io/projected/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-kube-api-access-wrgt2\") pod \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\" (UID: \"52fd7f85-8e6b-42cc-ad11-c625aef75a4d\") "
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.516803 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-utilities" (OuterVolumeSpecName: "utilities") pod "52fd7f85-8e6b-42cc-ad11-c625aef75a4d" (UID: "52fd7f85-8e6b-42cc-ad11-c625aef75a4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.519364 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-kube-api-access-wrgt2" (OuterVolumeSpecName: "kube-api-access-wrgt2") pod "52fd7f85-8e6b-42cc-ad11-c625aef75a4d" (UID: "52fd7f85-8e6b-42cc-ad11-c625aef75a4d"). InnerVolumeSpecName "kube-api-access-wrgt2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.520159 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7e46c8-af2b-46e9-9379-b44ce8676698-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.520222 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7e46c8-af2b-46e9-9379-b44ce8676698-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.520250 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.520267 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h9ck\" (UniqueName: \"kubernetes.io/projected/bd7e46c8-af2b-46e9-9379-b44ce8676698-kube-api-access-2h9ck\") on node \"crc\" DevicePath \"\""
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.520285 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrgt2\" (UniqueName: \"kubernetes.io/projected/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-kube-api-access-wrgt2\") on node \"crc\" DevicePath \"\""
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.570618 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52fd7f85-8e6b-42cc-ad11-c625aef75a4d" (UID: "52fd7f85-8e6b-42cc-ad11-c625aef75a4d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.621444 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52fd7f85-8e6b-42cc-ad11-c625aef75a4d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.959359 4953 generic.go:334] "Generic (PLEG): container finished" podID="bd7e46c8-af2b-46e9-9379-b44ce8676698" containerID="c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db" exitCode=0
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.959463 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j95rk" event={"ID":"bd7e46c8-af2b-46e9-9379-b44ce8676698","Type":"ContainerDied","Data":"c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db"}
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.959492 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j95rk"
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.959518 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j95rk" event={"ID":"bd7e46c8-af2b-46e9-9379-b44ce8676698","Type":"ContainerDied","Data":"dc0d6eb0dd411fed2e6143848e4e8895bc9a2e15bbb4876207481e8d6088f385"}
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.959538 4953 scope.go:117] "RemoveContainer" containerID="c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db"
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.962290 4953 generic.go:334] "Generic (PLEG): container finished" podID="52fd7f85-8e6b-42cc-ad11-c625aef75a4d" containerID="990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf" exitCode=0
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.962330 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzgm" event={"ID":"52fd7f85-8e6b-42cc-ad11-c625aef75a4d","Type":"ContainerDied","Data":"990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf"}
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.962353 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lhzgm"
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.962370 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzgm" event={"ID":"52fd7f85-8e6b-42cc-ad11-c625aef75a4d","Type":"ContainerDied","Data":"a61ac9bfeb4793c99912606f33842399ecc21c939a215bd7fc34055de42a2b95"}
Dec 11 11:41:25 crc kubenswrapper[4953]: I1211 11:41:25.985870 4953 scope.go:117] "RemoveContainer" containerID="f7643b3ee3999181d5cf5711d34c01f3917e9a9fa13db055aebc3c287b9e5222"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.002102 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j95rk"]
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.004204 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j95rk"]
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.023505 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lhzgm"]
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.026369 4953 scope.go:117] "RemoveContainer" containerID="0b4cd8aaa01c5802dbe7f91cbd33d12c2917c30d73e66b262bf1eb1e6283563a"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.031345 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lhzgm"]
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.044158 4953 scope.go:117] "RemoveContainer" containerID="c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db"
Dec 11 11:41:26 crc kubenswrapper[4953]: E1211 11:41:26.044676 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db\": container with ID starting with c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db not found: ID does not exist" containerID="c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.044721 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db"} err="failed to get container status \"c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db\": rpc error: code = NotFound desc = could not find container \"c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db\": container with ID starting with c679fe83e0914f0186411178ade77df0b766f5b8a1f139a5ab6c969a84e844db not found: ID does not exist"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.044745 4953 scope.go:117] "RemoveContainer" containerID="f7643b3ee3999181d5cf5711d34c01f3917e9a9fa13db055aebc3c287b9e5222"
Dec 11 11:41:26 crc kubenswrapper[4953]: E1211 11:41:26.045078 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7643b3ee3999181d5cf5711d34c01f3917e9a9fa13db055aebc3c287b9e5222\": container with ID starting with f7643b3ee3999181d5cf5711d34c01f3917e9a9fa13db055aebc3c287b9e5222 not found: ID does not exist" containerID="f7643b3ee3999181d5cf5711d34c01f3917e9a9fa13db055aebc3c287b9e5222"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.045106 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7643b3ee3999181d5cf5711d34c01f3917e9a9fa13db055aebc3c287b9e5222"} err="failed to get container status \"f7643b3ee3999181d5cf5711d34c01f3917e9a9fa13db055aebc3c287b9e5222\": rpc error: code = NotFound desc = could not find container \"f7643b3ee3999181d5cf5711d34c01f3917e9a9fa13db055aebc3c287b9e5222\": container with ID starting with f7643b3ee3999181d5cf5711d34c01f3917e9a9fa13db055aebc3c287b9e5222 not found: ID does not exist"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.045125 4953 scope.go:117] "RemoveContainer" containerID="0b4cd8aaa01c5802dbe7f91cbd33d12c2917c30d73e66b262bf1eb1e6283563a"
Dec 11 11:41:26 crc kubenswrapper[4953]: E1211 11:41:26.045425 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b4cd8aaa01c5802dbe7f91cbd33d12c2917c30d73e66b262bf1eb1e6283563a\": container with ID starting with 0b4cd8aaa01c5802dbe7f91cbd33d12c2917c30d73e66b262bf1eb1e6283563a not found: ID does not exist" containerID="0b4cd8aaa01c5802dbe7f91cbd33d12c2917c30d73e66b262bf1eb1e6283563a"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.045455 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b4cd8aaa01c5802dbe7f91cbd33d12c2917c30d73e66b262bf1eb1e6283563a"} err="failed to get container status \"0b4cd8aaa01c5802dbe7f91cbd33d12c2917c30d73e66b262bf1eb1e6283563a\": rpc error: code = NotFound desc = could not find container \"0b4cd8aaa01c5802dbe7f91cbd33d12c2917c30d73e66b262bf1eb1e6283563a\": container with ID starting with 0b4cd8aaa01c5802dbe7f91cbd33d12c2917c30d73e66b262bf1eb1e6283563a not found: ID does not exist"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.045472 4953 scope.go:117] "RemoveContainer" containerID="990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.066961 4953 scope.go:117] "RemoveContainer" containerID="79bdd5082a741c8fd0f002251c4b12d0c336974b84f066f3591439412c3c8a74"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.083157 4953 scope.go:117] "RemoveContainer" containerID="0a5067b0df671a0c153ae94bc82d0c972d9efd6f2e29ec767762ed0f88a499b6"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.111359 4953 scope.go:117] "RemoveContainer" containerID="990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf"
Dec 11 11:41:26 crc kubenswrapper[4953]: E1211 11:41:26.112286 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf\": container with ID starting with 990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf not found: ID does not exist" containerID="990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.112322 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf"} err="failed to get container status \"990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf\": rpc error: code = NotFound desc = could not find container \"990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf\": container with ID starting with 990def9658a57673c466c3cebc6fbe76892a679e0b39f457e6c6dd1b739e9cbf not found: ID does not exist"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.112348 4953 scope.go:117] "RemoveContainer" containerID="79bdd5082a741c8fd0f002251c4b12d0c336974b84f066f3591439412c3c8a74"
Dec 11 11:41:26 crc kubenswrapper[4953]: E1211 11:41:26.112648 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79bdd5082a741c8fd0f002251c4b12d0c336974b84f066f3591439412c3c8a74\": container with ID starting with 79bdd5082a741c8fd0f002251c4b12d0c336974b84f066f3591439412c3c8a74 not found: ID does not exist" containerID="79bdd5082a741c8fd0f002251c4b12d0c336974b84f066f3591439412c3c8a74"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.112681 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79bdd5082a741c8fd0f002251c4b12d0c336974b84f066f3591439412c3c8a74"} err="failed to get container status \"79bdd5082a741c8fd0f002251c4b12d0c336974b84f066f3591439412c3c8a74\": rpc error: code = NotFound desc = could not find container \"79bdd5082a741c8fd0f002251c4b12d0c336974b84f066f3591439412c3c8a74\": container with ID starting with 79bdd5082a741c8fd0f002251c4b12d0c336974b84f066f3591439412c3c8a74 not found: ID does not exist"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.112697 4953 scope.go:117] "RemoveContainer" containerID="0a5067b0df671a0c153ae94bc82d0c972d9efd6f2e29ec767762ed0f88a499b6"
Dec 11 11:41:26 crc kubenswrapper[4953]: E1211 11:41:26.113349 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5067b0df671a0c153ae94bc82d0c972d9efd6f2e29ec767762ed0f88a499b6\": container with ID starting with 0a5067b0df671a0c153ae94bc82d0c972d9efd6f2e29ec767762ed0f88a499b6 not found: ID does not exist" containerID="0a5067b0df671a0c153ae94bc82d0c972d9efd6f2e29ec767762ed0f88a499b6"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.113385 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5067b0df671a0c153ae94bc82d0c972d9efd6f2e29ec767762ed0f88a499b6"} err="failed to get container status \"0a5067b0df671a0c153ae94bc82d0c972d9efd6f2e29ec767762ed0f88a499b6\": rpc error: code = NotFound desc = could not find container \"0a5067b0df671a0c153ae94bc82d0c972d9efd6f2e29ec767762ed0f88a499b6\": container with ID starting with 0a5067b0df671a0c153ae94bc82d0c972d9efd6f2e29ec767762ed0f88a499b6 not found: ID does not exist"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.487956 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52fd7f85-8e6b-42cc-ad11-c625aef75a4d" path="/var/lib/kubelet/pods/52fd7f85-8e6b-42cc-ad11-c625aef75a4d/volumes"
Dec 11 11:41:26 crc kubenswrapper[4953]: I1211 11:41:26.488943 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7e46c8-af2b-46e9-9379-b44ce8676698" path="/var/lib/kubelet/pods/bd7e46c8-af2b-46e9-9379-b44ce8676698/volumes"